Author Archive

Complex Cloud Contracting

Posted on March 26th, 2014

The greatest pleasure, and the greatest challenge, of being a privacy lawyer is the need to be both an ethicist and a pragmatist.  Oftentimes, I find myself advising companies not just on what is the legal thing to do, but what is the right thing to do (and, no, the two aren’t always one and the same); while, on other occasions, my task is to find solutions to real or imagined business impediments presented by the law.

Nowhere is this dichotomy more apparent than when advising on cloud deals.  The future is cloud and mobile, as someone once said.  So it seems an oddity that privacy laws are all too often interpreted in ways that impair cloud adoption and utilization.  This oddity is perhaps most apparent when negotiating cloud deals, where two parties who are in commercial agreement and want to realize the benefits of a cloud relationship are unable to reach contractual agreement over basic data protection terms.

This failure to reach contractual agreement is so often due to a misunderstanding of, or (sometimes) a perverse interpretation of, EU data protection requirements that I thought I’d use this post to set the record straight.  The following is necessarily broad brush, but hopefully paints a picture of the key things to consider in cloud deals and how to address them:

1.  What data protection terms does the law require?  In most cloud relationships, the service provider will be a “data processor” and its client the “data controller”.  In this type of relationship, the client is legally obligated to impose two key requirements on the service provider – first, that the service provider must act only on its instructions; second, that the service provider must have in place “appropriate” security.  There’s no point negotiating these.  Just accept them as a legal necessity and move on.

2.  What about Germany?  Germany is a huge market for cloud contracting, but its data privacy laws are notoriously strict.  If you’re a cloud provider rolling out a pan-EU service, you have to address German data privacy requirements as part of your offering or risk not doing business in a major EU market.  In addition to the two requirements described above, Germany also mandates precise “technical and organisational” security measures for the cloud service and the granting of audit rights in favour of the cloud client.  These need to be addressed either within the standard EU ts&cs for the cloud service or, alternatively, by way of bespoke terms just for German deals.

3.  Audit rights???  Yes, that’s right. Certain EU territories, like Germany, expect cloud clients to have audit rights over their cloud providers.  To most cloud providers, the idea of granting audit rights under their standard terms is anathema.  Imagine a provider with thousands of clients – only a small fraction of those clients need exercise audit rights at any one time for the business disruption to be overwhelming.  Not only that, but allowing multiple clients onsite and into server rooms for audit purposes itself creates a huge security risk. So what’s the solution?  A common one: many cloud service providers have these days been independently audited against ISO and SSAE standards.  Committing in the contract to maintain recognised third party audit certifications throughout the duration of the cloud deal – possibly even offering to provide a copy of the audit certification or a summary of the audit report – will (and rightly should) satisfy many cloud clients.

4.  The old “European data center” chestnut.  I’ve been in more than a few negotiations where there’s been a mistaken belief that the cloud service provider needs to host all data in Europe in order for the service to be “legal” under European data protection law.  This is a total fallacy.  Cloud service providers can (and, make no mistake, will) move data anywhere in the world – often in the interests of security, back-ups, support and cost efficiency.  What’s more, the law permits this – though it does require that some manner of legal “data export” solution first be implemented for data being transferred out of Europe.  There are a number of solutions available – from model clauses to safe harbor to Binding Corporate Rules.  Cloud clients need to check their service providers have one of these solutions in place and that it covers the data exports in question but, so long as they do, then there’s no reason why data cannot be moved around internationally for service-related reasons.

5.  Security.  The law requires cloud clients to ensure that their service providers have implemented “appropriate” security.  The thing is, cloud clients often aren’t best able to assess whether their cloud provider’s security is or is not “appropriate” – one of the commonly cited reasons for outsourcing to the cloud in the first place is to take the benefit of the greater security expertise that cloud providers offer.  To further complicate matters, some territories – like Germany, Poland and Spain – have precise data security rules.  It’s highly unlikely that a cloud provider will ever tailor its global IT infrastructure to address nationally-driven requirements of just one or two territories, so outside of heavily-regulated sectors, there’s little point trying to negotiate for those.  Instead, cloud clients should look to other security assurances the cloud provider can offer – most notably, whether it maintains ISO and SSAE certification (see above!).

6.  Subcontracting.  Cloud suppliers subcontract: it’s a fact of life.  Whether to their own group affiliates or externally to third party suppliers, the likelihood is that the party concluding the cloud contracting will not be (solely) responsible for performing it.  The question inevitably arises as to whether the supplier needs its client’s consent to subcontract: the short answer is, generally, yes, but there’s no reason why a general consent to subcontract can’t be obtained upfront in the contract.  At the same time, however, the cloud customer will want assurances that its data won’t be outsourced to a subcontractor with lax data protection standards, so any such consent should be carefully conditioned on the cloud provider flowing down its data protection responsibilities and committing to take responsibility for managing the subcontractor’s compliance.

7.  What other terms should be in a cloud contract?  In addition to the points already discussed, it’s critical that cloud providers have in place a robust data breach response mechanism – so that they detect security intrusions asap and inform the cloud client promptly, giving it the opportunity to manage its own fallout from the breach and address any legal data breach notification requirements it may be under.  In addition, cloud providers should be expected to inform their clients (where legally permitted to do so) about any notices or complaints they receive concerning their hosting or processing of their client’s data – the client will generally be on the hook for responding to these, so it’s important it receives these notices promptly giving it adequate time to respond.

So there’s no reason that data protection should be holding those deals up!  All of the issues described above have straightforward solutions that should be palatable to cloud clients and providers alike.  Remember: good data protection and good business are not mutually exclusive – they are realistic, compatible goals.

How do EU and US privacy regimes compare?

Posted on March 5th, 2014

As an EU privacy professional working in the US, one of the things that regularly fascinates me is each continent’s misperception of the other’s privacy rules.  Far too often have I heard EU privacy professionals (who really should know better) mutter something like “The US doesn’t have a privacy law” in conversation; equally, I’ve heard US colleagues talk about the EU’s rules as being “nuts” without understanding the cultural sensitivities that drive European laws.

So I thought it would be worth dedicating a few lines to compare and contrast the different regimes, principally to highlight that, yes, they are indeed different, but, no, you cannot draw a conclusion from these differences that one regime is “better” (whatever that means) than the other.  You can think of what follows as a kind of brief 101 in EU/US privacy differences.

1.  Culturally, there is a stronger expectation of privacy in the EU.  It’s often said that there is a stronger cultural expectation of privacy in the EU than the US.  Indeed, that’s probably true.   Privacy in the EU is protected as a “fundamental right” under the European Union’s Charter of Fundamental Rights – essentially, it’s akin to a constitutional right for EU citizens.  Debates about privacy and data protection evoke as much emotion in the EU as do debates about gun control legislation in the US.

2.  Forget the myth: the US DOES have data protection laws.  It’s simply not true that the US doesn’t have data protection laws.  The difference is that, while the EU has an all-encompassing data protection framework (the Data Protection Directive) that applies across every Member State, across all sectors and across all types of data, the US has no directly analogous equivalent.  That’s not the same thing as saying the US has no privacy laws – it has an abundance of them!  From federal rules designed to deal with specific risk scenarios (for example, collection of child data online is regulated under the Children’s Online Privacy Protection Act), to sector-specific rules (Health Insurance Portability and Accountability Act for health-related information and the Gramm-Leach-Bliley Act for financial information), to state-driven rules (the California Online Privacy Protection Act in California, for example – California, incidentally, also protects individuals’ right to privacy under its constitution).  So the next time someone tells you that the US has no privacy law, don’t fall for it – comparing EU and US privacy rules is like comparing apples to a whole bunch of oranges.

3.  Class actions.  US businesses spend a lot of time worrying about class actions and, in the privacy realm, there have been many.  Countless times I’ve sat with US clients who agonise over their privacy policy drafting to ensure that the disclosures they make are sufficiently clear and transparent in order to avoid any accusation they may have misled consumers.  Successful class actions can run into the millions of $$$ and, with that much potential liability at stake, US businesses take this privacy compliance risk very seriously.  But when was the last time you heard of a successful class action in the EU?  For that matter, when was the last time you heard of ANY kind of award of meaningful damages to individuals for breaches of data protection law?

4.  Regulatory bark vs. bite.  So, in the absence of meaningful legal redress through the courts, what can EU citizens do to ensure their privacy rights are respected?  The short answer is complain to their national data protection authorities, and EU data protection authorities tend to be very interested and very vocal.  Bodies like the Article 29 Working Party, for example, pump out an enormous volume of regulatory guidance, as do certain national data protection authorities, like the UK Information Commissioner’s Office or the French CNIL. Over in the US, American consumers also have their own heavyweight regulatory champion in the form of the Federal Trade Commission which, by using its powers to take enforcement action against “unfair and deceptive practices” under the FTC Act, is getting ever more active in the realm of data protection enforcement.  And look at some of the settlements it has reached with high profile companies – settlements that, in some cases, have run in excess of US$20m and resulted in businesses having to subject themselves to 20 year compliance audits.  By contrast, however vocal EU DPAs are, their powers of enforcement are typically much more limited, with some even lacking the ability to fine.

So those are just some of the big picture differences, but there are so many more points of detail a well-informed privacy professional ought to know – like how the US notion of “personally identifiable information” contrasts with EU “personal data”, why the US model of relying on consent to legitimise data processing is less favoured in the EU, and what the similarities and differences are between US “fair information practice principles” and EU “data protection principles”.

That’s all for another time, but for now take away this:  while they may go about it in different ways, the EU and US each share a common goal of protecting individuals’ privacy rights.  Is either regime perfect?  No, but each could sure learn a lot from the other.




What a 21st Century Privacy Law Could – and Should – Achieve

Posted on January 22nd, 2014

It’s no secret that the EU’s proposed General Data Protection Regulation (GDPR) hangs in the balance. Some have even declared it dead (see here), though, to paraphrase Mark Twain, those reports are somewhat exaggerated. Nevertheless, 2014 will prove a pivotal year for privacy in the European Union: Either we’ll see the proposed regulation adopted in some form, or we’ll be heading back to the drawing board.

So much has already been said and written about what will happen if the GDPR is not adopted by May that it does not need repeating here. Though, for my part, I’d be quite happy to return to the drawing board: Better, I think, to start again and design a good law than to adopt legislation for the sake of it—no matter how ill-suited it is to modern-day data processing standards.

With that in mind, I thought I’d reflect on what I think a fighting-fit 21st century data protection law ought to achieve, keeping in mind the ultimate aims of protecting citizens’ rights, promoting technological innovation and fostering economic growth:

1. A modern data privacy law should be simple, objectives-focused and achievable.  The GDPR is, quite simply, a lawyer’s playground, a lengthy document of breathtaking complexity that places far more emphasis on process than on outcome. It cannot possibly hope to be understood by the very stakeholders it aims to protect: European citizens. A modern data privacy law should be understandable by all—and especially by the very stakeholders whose interests it is intended to protect. Further, a modern privacy law needs to focus on outcomes. Ultimately, its success will be judged by whether it arrived at its destination (did it keep data private and secure?) not the journey by which it got there (how much paper did it create?).

2. A modern privacy law should recognize and reflect the role of the middleman.  Whether you’re a user of mobile services, the consumer Internet or cloud-based services, access to your data will in some way be controlled by an intermediary third party: the iOS, Android or Windows mobile platforms whose APIs control access to your device data, the web browser that blocks or accepts third-party tracking technologies by default or the cloud platform that provides the environment for remotely hosted data processing services. Yet these “middlemen” —for want of a better term—simply aren’t adequately reflected in either current or proposed EU privacy law, which instead prefers an outmoded binary world of “controllers” and “processors.” This means that, to date, we have largely relied on the goodwill of platform providers—Are they controllers? Are they processors?—to build controls and default settings into their platforms that prevent unwarranted access to our data by the applications we use. A modern data privacy law would recognize and formalize the important role played by these middlemen, requiring them to step up to the challenge of protecting our data.

3. A modern data privacy law would categorize sensitive data by reference to the data we REALLY care about.  Europe’s definition of sensitive—or “special”—personal data has long been a mystery to me. Do we really still expect information about an individual’s trade union membership or political beliefs to be categorized as sensitive when their bank account details and data about their children are not treated as sensitive in Europe—unlike the U.S.? A modern data privacy law would impose a less rigid concept of sensitive personal data, one that takes greater account of context and treats as sensitive the information that people really care about—and not the information they don’t.

4. A modern privacy law would encourage anonymization and pseudonymization.  Sure, we all know that true anonymization is virtually impossible, that if you have a large enough dataset of anonymized data and compare it with data from this source and that source, eventually you might be able to actually identify someone. But is that really a good enough reason to expect organizations to treat anonymized and pseudonymized data as though they are still “personal” data, with all the regulatory consequences that entails? From a policy perspective, this just disincentivises anonymization and pseudonymization—why bother, if it doesn’t reduce regulatory burden? That’s plainly the wrong result. A modern data privacy law would recognize that not all data is created equal, and that appropriately anonymized and pseudonymized data deserve lesser restrictions as to their use—or reuse—and disclosure. Without this, we cannot hope to realize the full benefits of Big Data and the societal advances it promises to deliver.
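To make the point about pseudonymisation concrete, here is a minimal, illustrative Python sketch (all names hypothetical) of one common technique: a keyed hash (HMAC). Without the separately held secret key the pseudonym cannot be linked back to the original identifier, yet the same identifier always maps to the same token, so records can still be joined for analysis.

```python
import hmac
import hashlib

# Assumption: the key is stored separately from the dataset and rotated
# according to policy; anyone holding only the data cannot reverse the tokens.
SECRET_KEY = b"rotate-and-store-me-separately"

def pseudonymise(user_id: str) -> str:
    """Map an identifier to a stable, non-reversible pseudonym."""
    return hmac.new(SECRET_KEY, user_id.encode("utf-8"), hashlib.sha256).hexdigest()

# Replace the direct identifier in a record with its pseudonym,
# keeping the analytically useful fields intact.
record = {"user_id": "alice@example.com", "purchase": "book"}
safe_record = {**record, "user_id": pseudonymise(record["user_id"])}
```

The design choice here is determinism: unlike random tokens, a keyed hash lets two pseudonymised datasets be joined on the same individual without ever exposing the underlying identifier.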

5. A modern privacy law would not impose unrealistic restrictions on global movements of data.  The Internet has happened; get over it. Data will forever more move internationally, left, right, up and down across borders, and no amount of regulation and red tape is going to stop that. Nor will Europe’s bizarre obsession with model clauses. And when it comes to surveillance, law enforcement will always do what law enforcement will do: Whilst reining in excessive government surveillance is undoubtedly crucial, that ultimately is an issue to be resolved at a political level, not at the business regulatory level. A modern data privacy law should concern itself not with where data is processed but why it is processed and how it is protected. So long as data is kept secure and processed in accordance with the controller’s legal obligations and in keeping with its data subjects’ reasonable expectations, the controller should be free to process that data wherever in the world it likes. Maintaining unrealistic restrictions on international data exports at best achieves little—organizations will do it anyway using check-box solutions like model clauses—and, at worst, will adversely impact critical technology developments like the cloud.

6. A modern privacy law would recognize that consent is NOT the best way to protect people’s privacy.  I’ve argued this before, but consent does not deliver the level of protection that many think it does. Instead, it drives lazy, check-box compliance models—“he/she ticked the box, so now I can do whatever I like with their data.” A modern law would acknowledge that, while consent will always be an important weapon in the privacy arsenal, it should not be the weapon of choice. There must always be other ways of legitimizing data processing and, perhaps, other than in the context of sensitive personal information, these should be prioritized over consent. At the same time, if consent is to play a lesser role in legitimizing processing at the outset, then the rights given to individuals to object to processing of their data once it has begun must be bolstered—without this, you place too much responsibility in the hands of controllers to decide when and why to process data with no ability for individuals to restrain unwanted intrusions into their privacy. There’s a delicate balance to be struck, but a modern data privacy law would not shy away from finding this balance. Indeed, given the emergence of the Internet of Things, finding this balance is now more important than ever.

There’s so much more that could be said, and the above proposals represent just a handful of suggestions that any country looking to adopt new privacy laws—or reform existing ones—would be well-advised to consider. You can form your own views as to whether the EU’s proposed GDPR—or indeed any privacy law anywhere in the world—achieves these recommendations. If they don’t now, then they really should; otherwise, we’ll just be applying 20th-century thinking to a 21st-century world.

This post was first published on the IAPP’s Privacy Perspectives blog.


2013 a big year for privacy? You ain’t seen nothing yet!

Posted on December 31st, 2013

If you thought that 2013 was a big year for privacy, then prepare yourself: it was only the beginning.  Many of the privacy stories whose winding narratives began in 2013 will continue to take unexpected twists and turns throughout 2014, with several poised to reach dramatic conclusions – or otherwise spawn spin-offs and sequels.

Here are just a few of the stories likely to dominate the privacy headlines in 2014:

1.  EU data protection reform:  The Commission’s draft General Data Protection Regulation arrived with a bang in January 2012, proposing fines of up to 2% of global turnover for data protection breaches, a 24-hour data breach notification regime, and a controversial new right for individuals to have their data “forgotten” from the Internet, among many other things.  Heated debate about the pros and cons of these reforms continued into 2013, with the European Parliament’s LIBE Committee only voting on and publishing its position on the draft Regulation in October 2013 (missing two earlier deadlines).  All eyes then turned to the Council, expecting it to put forward its position on the draft Regulation sometime in December, only to discover that it had gotten hung up on the “one stop shop” principle and made little real progress at all.  With the original goal being to adopt the new Regulation before the European Parliamentary elections in May 2014, a real question mark now hangs over whether Europe will achieve this deadline – and what will happen if it doesn’t.

2.  NSA surveillance:  The biggest privacy story – if not the biggest news story – of 2013 concerned the leaks of classified documents from the US National Security Agency by its contractor, Edward Snowden.  The leaks revealed that the NSA had been collecting Internet users’ metadata from the servers of leading technology companies and from the cables that carry our Internet communications around the world. This story has had a profound effect in terms of raising individuals’ privacy awareness worldwide, impacting global political and trade relationships, and adding impetus to the European Union’s regulatory reform agenda.  With the Guardian newspaper recently declaring that it has so far revealed only about 1% of the materials Edward Snowden has disclosed to it – and British television broadcasting an “alternative” Christmas message from Edward Snowden on “Why privacy matters” – it’s safe to say that this is a story that will continue to headline throughout 2014, prompting the global privacy community to contemplate perhaps the most fundamental privacy question of all: to what extent, if at all, will we trade personal privacy in the interests of global security?

3.  Safe harbor: Regulators across several European territories have, for many years now, been grumbling about the “adequacy” of the EU/US safe harbor regime as a basis for exporting data from the European Union to the US.  The Snowden revelations have further fuelled this fire, ultimately leading to the European Commission publishing a set of 13 recommendations for restoring trust in safe harbor.  The Commission has set the US Department of Commerce an ambitious deadline of summer 2014 to address these recommendations – and raised the “nuclear” prospect that it may even suspend safe harbor if this does not happen.  With some 3,000+ US companies currently relying on safe harbor for their EU data exports, many US-led corporations will be watching this story very closely – and would be well-advised to begin contingency planning now…

4.  New technologies:  Ever-evolving technologies will continue to challenge traditional notions of data privacy throughout 2014.  In the past year alone, Big Data has bumped heads with the concepts of purpose limitation and data minimisation, the Internet of Things has highlighted the shortcomings of user consent in an everything-connected world, and the exponential growth of cloud technologies continues to demonstrate the absurdity of extra-EEA data export restrictions and their attendant solutions (Do model clauses really provide adequate protection? Tsch.) Quite aside from the issues presented by technologies like Google Glass and iPhone fingerprint recognition, who can say what other new devices, platforms and services we’ll see in 2014 – and how these will challenge the global privacy community to get creative and adapt accordingly?

5.  Global interoperability:  As of year end, there are close to 100 countries with data protection laws on their statute books, with new privacy laws either coming into effect or getting adopted in countries like Mexico, Australia and South Africa throughout 2013.  And there are still many more countries with data privacy bills under discussion or with new laws coming into effect throughout 2014 (Singapore being one example).  Legislators around the world are waking up to the need to adopt new statutory frameworks (or to reform existing ones) to respect individuals’ privacy – both in the interests of protecting their citizens and, with the digital economy becoming ever more important, in order not to lose out to businesses looking for ‘safe’ countries to house their data processing operations.  All these new laws will continue to raise challenges in terms of global interoperability – how does an organization spread across multiple international territories comply with its manifold, and often varied, legal obligations while at the same time adopting globally consistent data protection policies, managed with limited internal resources?

6.  Coordinated enforcement:  In 2013, we’ve seen the first real example of cross-border privacy enforcement, with six data protection authorities (led by the CNIL) taking coordinated enforcement action against Google over the launch of its consolidated privacy policy across its various service lines.  With the limitations of national deterrents for data privacy breaches that exist for regulators in many territories (some cannot impose fines, while others can impose only limited fines) and continuing discussion about the need for “one stop shop” enforcement under the proposed General Data Protection Regulation, it seems likely that we’ll see more cooperation and coordinated enforcement by data protection authorities in 2014 and beyond.

2013 was undoubtedly an exciting year for data privacy, but 2014 promises so much more.  It won’t be enough for the privacy community just to know the law – we must each of us become privacy strategists if we are to do proper justice to the business and consumer stakeholders we represent.  We have exciting times ahead.

Happy New Year everyone!

Getting cookie consent throughout the EU – latest Working Party guidance

Posted on October 19th, 2013

Thinking back to the early days when Europe’s controversial “cookie consent” law first passed, many in the privacy community complained about lack of guidance on obtaining consent.  The law required them to get consent, but didn’t say how.

In response to this, legislators and regulators – at both an EU and a national level – responded that consent solutions should be market-led.  The thinking went that the online industry was better placed to devise creative and unobtrusive ways to get consent than lawyers, regulators and legislative draftsmen.

As it transpired, this is precisely what happened.  In the four years since Europe adopted cookie consent, online operators have evolved and embraced implied consent models across the EU to obtain their visitors’ consent to cookies.  However, this is not where the story ends.

In an opinion last week, the Article 29 Working Party published further guidance on obtaining cookie consent (“Working Document 02/2013 providing guidance on obtaining consent for cookies” – available here).   This supplements several previous opinions that, directly or indirectly, also address cookie consent requirements (see here, and here, and here, and here, for example).

The rationale behind the latest opinion, on the face of it, is to address the question: “what [cookie consent] implementation would be legally compliant for a website that operates across all EU Member States?”  But in answering this question, the guidance veers towards a level of conservatism that all but ensures it will never see widespread – let alone pan-European – adoption.

It doesn’t start off well: in discussing how a user can signify choice over whether or not to receive cookies, the guidance at one point states: “it could include a handwritten signature affixed at the bottom of a paper form”.

It then goes on to say that “consent has to be given before the processing starts … As a result a website should deliver a consent solution in which no cookies are set to user’s device … before that user has signalled their wishes regarding such cookies.”  In other words, the guidance indicates the need for a pop-up or a barrier page for users to click through before cookies can be set, harking back to the worst fears of industry at the time the cookie consent law was originally proposed.
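For illustration only, here is a framework-agnostic Python sketch (all names hypothetical) of what a “no cookies before consent” gate looks like in practice: strictly necessary cookies pass through, while everything else is held back until the visitor has signalled a choice.

```python
# Hypothetical sketch of a consent gate. "Strictly necessary" cookies
# (e.g. the session cookie, CSRF token) are generally treated as exempt
# from the consent requirement; all other cookies are withheld until
# the visitor has signalled their wishes.
STRICTLY_NECESSARY = {"session_id", "csrf_token"}

def cookies_to_set(requested: dict, consent_given: bool) -> dict:
    """Filter the cookies a response is allowed to set on the user's device."""
    if consent_given:
        return dict(requested)
    # No consent signal yet: hold back analytics, advertising, etc.
    return {name: value for name, value in requested.items()
            if name in STRICTLY_NECESSARY}

# The cookies the site would like to set on this response:
pending = {"session_id": "abc", "analytics_id": "xyz", "ad_pref": "1"}
```

In a real deployment the `consent_given` flag would itself typically be recorded in a (strictly necessary) cookie once the visitor makes a choice; the point of the sketch is simply that non-essential cookies never reach the device before that signal.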

When we’re talking about a fundamental human right, like privacy, the attraction of prior consent is obvious.  Unfortunately, it’s practically and technically very challenging.  However easy it sounds in theory (and it does sound easy, doesn’t it?), the realities are much more problematic.  For example, do you really require website operators to build two versions of their websites: one with cookies, and one without?  What happens to ‘free’ content on the web whose cost is currently subsidised by targeted advertising – who wants to return to a subscription-funded Internet?  If you’re a third party service provider, how do you guarantee prior consent when it is your customer (the website operator) who has the relationship with its visitors?

More importantly, prior consent is not what the e-Privacy Directive requires.  The word ‘prior’ never appears in the revised Article 5(3) of the e-Privacy Directive (the Article that imposes the consent requirement).  In fact, the word ‘prior’ was originally proposed, but was later dropped during the course of legislative passage.  Contrast this with Article 6(3), for example, which deals with processing of communications metadata (think PRISM) and DOES call for ‘prior’ consent.  Article 13 on unsolicited communications also uses the word ‘prior’ next to its requirement for consent.

What conclusions should we draw from this?  That’s a debate that lawyers, like me, have been having for a long time.  But, frankly, it’s all pretty academic.  Let’s deal instead in realities: if we were to be faced with cookie pop-ups or barrier pages on entry to EVERY website on the Internet, how quickly would we become fatigued and simply click away the notices just to get rid of them?  What would that say about the validity of any ‘prior’ consents we provide?

Industry evolved implied consent as a solution that struck a balance between protecting individuals’ rights, addressing legal compliance and enabling online business.  Over time, it has done wonders to improve online tracking transparency and choice – implied consent has now become so widespread in the EU that even companies for whom cookies are their lifeblood, like Google, have implemented cookie consent transparency and choice mechanisms.

Critically, when done right, implied consent models fully satisfy the legal requirement that users’ consent must be “freely given, specific and informed”.  So here’s my suggestion: if you are looking to implement a cookie consent solution across Europe, don’t automatically jump to the most conservative standard that will put you out of alignment with your competitors and that, in most cases, will go further than national legislation requires.

Consider, instead, implied consent – but, if you do, embrace it properly:  a slight revision to your privacy policy and a new link to a cookie policy in the footer of your website won’t suffice.  Your implied consent model needs to provide prominent, meaningful notice and choice to visitors.  And to see how to do that, see our earlier post here.

Information Pollution and the Internet of Things

Posted on September 8th, 2013 by

Kevin Ashton, the man credited with coining the term “The Internet of Things”, once said: “The Internet of Things has the potential to change the world, just as the Internet did. Maybe even more so.”

This couldn’t be more true. The range of potential applications for the Internet of Things, from consumer electronics to energy efficiency and from supply chain management to traffic safety, is breathtaking. Today, there are 6 billion or so connected devices on the planet. By 2020, some estimate that figure will be in the range of 30 to 50 billion. Applying some very basic maths, that’s between 4 and 7 internet-connected “things” per person.
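The sum really is basic. As a quick check, assuming a 2020 world population of roughly 7.5 billion (my assumption, not a figure from the estimates quoted above):

```python
# Back-of-the-envelope check of the "4 to 7 things per person" claim.
# The 7.5 billion world population for 2020 is an assumption, not a
# figure taken from the device estimates quoted above.
world_population = 7.5e9

for devices in (30e9, 50e9):
    per_person = devices / world_population
    print(f"{devices / 1e9:.0f} billion devices -> {per_person:.1f} per person")
```

Thirty billion devices works out at 4 per person; fifty billion at just under 7.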

All this, of course, means vast levels of automated data generation, processing and sharing. Forget Big Data: we’re talking mind-blowingly Huge Data. That presents numerous challenges to traditional notions of privacy, and issues of applicability of law, transparency, choice and security have been (and will continue to be) debated at length.

One area that deserves particular attention is how we deal with data access in an everything-connected world. There’s a general notion in privacy that individuals should have a right to access their information – indeed, this right is hard-coded into EU law. But when so much information is collected – and across so many devices – how can we provide individuals with meaningful access to information in a way that is not totally overwhelming?

Consider a world where your car, your thermostat, your DVR, your phone, your security system, your portable health device, and your fridge are all trying to communicate information to you on a 24/7/365 basis: “This road’s busy, take that one instead”, “Why not lower your temperature by two degrees”, “That program you recorded is ready to watch”, “You forgot to take your medication today” and so on.

The problem will be one of information pollution: there will be just too much information available. How do you stop individuals feeling completely overwhelmed by this? The truth is that no matter how much we, as a privacy community, try to preserve rights for individuals to access as much data as possible, most will never explore their data beyond a very cursory, superficial level. We simply don’t have the energy or time.

So how do we deal with this challenge? The answer is to abstract away from the detail of the data and make readily available to individuals only the information they want to see, when they want to see it. Very few people want a level of detail typically of interest only to IT forensics experts in complex fraud cases – like what IP addresses they used to access a service or the version number of the software on their device. They want, instead, to have access to information that holds meaning for them, presented in a real, tangible and easy-to-digest way. For want of a better descriptor, the information needs to be presented in a way that is “accessible”.

This means information innovation will be the next big thing: maybe we’ll see innovators create consumer-facing dashboards that collect, sift and simplify vast amounts of information across their many connected devices, perhaps using behavioural, geolocation and spatial profiling techniques to tell consumers the information that matters to them at that point in time.

And if this all sounds a little too far-fetched, then check out services like Google Now and TripIt, to name just a couple. Services are already emerging to address information pollution, and we have a mere 6 billion devices so far. Imagine what will happen with the next 30 billion or so!

The Internet and the Great Data Deletion Debate

Posted on August 15th, 2013 by

Can your data, once uploaded publicly onto the Web, ever realistically be forgotten?  This was the debate I was having with a friend from the IAPP last night.  Much has been said about the EU’s proposals for a ‘right to be forgotten’ but, rather than arguing points of law, we were simply debating whether it is even possible to purge all copies of an individual’s data from the Web.

The answer, I think, is both yes and no: yes, it’s technically possible, and no, it’s very unlikely ever to happen.  Here’s why:

1. To purge all copies of an individual’s data from the Web, you’d need either (a) to know where all copies of those data exist on the Web, or (b) to give the data some kind of built-in ‘self-destruct’ mechanism so that it knows to purge itself after a set period of time.

2.  Solution (a) creates as many privacy issues as it solves.  You’d need either to create some kind of massive database tracking where all copies of data go on the Web or each copy of the data would need, somehow, to be ‘linked’ directly or indirectly to all other copies.  Even assuming it was technically feasible, it would have a chilling effect on freedom of speech – consider how likely a whistleblower would be to post content knowing that every copy of that content could be traced back to its original source.  In fact, how would anyone feel about posting content to the Internet knowing that every single subsequent copy could easily be traced back to their original post and, ultimately, back to them?

3.  That leaves solution (b).  It is wholly possible to create files with built-in self-destruct mechanisms, but they would no longer be pure ‘data’ files.  Instead, they would be executable files – i.e. files that can be run as software on the systems on which they’re hosted.  But allowing executable data files to be imported and run on Web-connected IT systems creates huge security exposure – the potential for exploitation by viruses and malicious software would be enormous.  The other possibility would be that the data file contains a separate data field instructing the system on which it is hosted when to delete it – much like a cookie has an expiry date.  That would be fine for proprietary data formats on closed IT systems, but is unlikely to catch on across existing, well-established and standardised data formats like .jpgs, .mpgs etc. across the global Web.  So the prospects for solution (b) catching on also appear slim.
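To make the ‘expiry field’ variant of solution (b) concrete, here is a minimal, purely hypothetical sketch: a payload wrapped with an expiry timestamp that a cooperating host system checks before serving the data, much as a browser honours a cookie’s expiry date. The field names and functions are invented for illustration – no standard data format works this way, which is precisely the adoption problem.

```python
import json
import time

# Hypothetical self-expiring data wrapper. The "expires_at" field is
# invented for illustration: it only works if every host system that
# stores a copy agrees to check it before serving the data -- which is
# exactly why such a scheme suits closed, proprietary systems but is
# unlikely to catch on across open, standardised formats.

def wrap(payload: str, ttl_seconds: float) -> str:
    """Package data with an expiry timestamp, like a cookie's expiry date."""
    return json.dumps({
        "expires_at": time.time() + ttl_seconds,
        "payload": payload,
    })

def read(wrapped: str):
    """Return the payload, or None once the data has 'self-destructed'."""
    record = json.loads(wrapped)
    if time.time() >= record["expires_at"]:
        return None  # a compliant host treats the data as purged
    return record["payload"]

doc = wrap("personal data", ttl_seconds=60)
print(read(doc))
```

Note that the file itself never executes anything – the deletion logic lives entirely in the host system, so the scheme stands or falls on every host implementing it.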

What are the consequences of this?  If we can’t purge copies of individuals’ data spread across the Internet, where does that leave us?  Likely the only realistic solution is to control the propagation of the data at source.  Achieving that requires a combination of:

(a)  Awareness and education – informing individuals through privacy statements and contextual notices how their data may be shared, and educating them not to upload content they (or others) wouldn’t want to share;

(b)  Product design – utilising privacy impact assessments and privacy by design methodologies to assess product / service intrusiveness at the outset and then designing systems that don’t allow illegitimate data propagation; and

(c)  Regulation and sanctions – we need proportionate regulation backed by appropriate sanctions to incentivise realistic protections and discourage illegitimate data trading.  

No one doubts that privacy on the Internet is a challenge, and nowhere does it become more challenging than with the speedy and uncontrolled copying of data.   But let’s not focus on how we stop data once it’s ‘out there’ – however hard we try, that’s likely to remain an unrealistic goal.  Let’s focus instead on source-based controls – this is achievable and, ultimately, will best protect individuals and their data.

The true meaning of privacy (and why I became a privacy professional)

Posted on July 5th, 2013 by

Long before I became a privacy professional, I first graduated with a degree in computer science. At the time, like many graduates, I had little real notion of what it was I wanted to do with my life, so I took a couple of internships working as a database programmer. That was my first introduction to the world of data.

I quickly realized that I had little ambition to remain a career programmer, so I began to look at other professions. In my early twenties, and having the kind of idealistic tendencies commonplace in many young graduates, I decided I wanted to do something that mattered, something that would—in some way—benefit the world: I chose to become a lawyer.

Not, you might think, the most obvious choice given the (unfair) reputation that the legal profession tends to suffer. Nevertheless, I was attracted to a profession bound by an ethical code, that believed in principles like “innocent until proven guilty” and acting in the best interests of the client, and that took the time to explore and understand both sides to every argument. And, if I’m completely honest, I was also attracted by the unlimited access to truly wonderful stationery that a legal career would afford.

After brief stints as a trainee in real estate law, litigation and environmental law, I decided to pursue a career as a technology lawyer. After all, given my background, it seemed a natural fit, and having a technical understanding of the difference between things like megabytes and megabits, RAM and ROM and synchronous and asynchronous broadband gave me a head start over some of my peers.

On qualifying, I began picking up the odd bit of data protection work (Read: drafting privacy policies). Over time, I became a privacy “go to” person in the firms I worked at, not so much through any great talent on my part but simply because, at the time, I was among the small number of lawyers who knew anything about privacy and, for reasons I still don’t really understand, my colleagues considered data protection work a bewilderingly complex area of law, best left to those who “get” it—much like the way I felt about tax and antitrust law.

It’s not a career path I regret. I love advising on privacy issues because privacy speaks to all the idealized ethical notions I had when I first graduated. With privacy, I get to advise on matters that affect people, that concern right or wrong, that are guided by lofty ethical principles about respecting people’s fundamental rights. I run projects across scores of different countries, each with different legal regimes, regulators and cultural sensitivities. Intellectually, it is very challenging and satisfying.

Yet, at the same time, I have grown increasingly concerned about the dichotomy between the protections law and regulation see fit to mandate and what, in practice, actually delivers the best protection for people’s personal information. To my mind, far too much time is spent on filing registrations and carefully designing legal terms that satisfy legal obligations and create the impression of good compliance; far too little time is spent on privacy impact analyses, careful system design, robust vendor procurement processes and training and audit.

Lawyers, naturally enough, often think of privacy in terms of legal compliance, but any professional experienced in privacy will tell you that many legal obligations are counterintuitive or do little, in real terms, to protect people’s information. Take the EU’s binary controller/processor regime, for example. Why do controllers bear all the compliance risk? Surely everyone who handles data has a role to play in its protection. Similarly, what good do local controller registrations do anyone?  They’re a costly, burdensome paperwork exercise that is seldom completed efficiently, accurately or—in many cases—even at all. And all those intra-group data sharing agreements—how much time do you spend negotiating their language with regional counsel rather than implementing measures to actually protect data?

Questions like these trouble me.  While the upcoming EU legal reform attempts to address several of these issues, many of its proposed changes to me seem likely to further exacerbate the problem. But for every critic of the reforms, there is an equally vocal proponent of them. So much so that reaching an agreed position between the European Council and Parliament—or even just within the Parliament—seems a near-insurmountable task.

Why is this reform so polarizing? It would be easy to characterize the division of opinions simply as being a split between regulators and industry, privacy advocates and industry lobbyists—indeed, many do. However, the answer is, I suspect, something more fundamental: namely, that we lack a common understanding of what “privacy” is and why it deserves protection.

As privacy professionals, we take for granted that “privacy” is something important and in need of protection. Yet privacy means different things to different people. To some, it means having the ability to sanction uses of our information before they happen; to others, it means being able to stop uses to which we object. Some focus on the inputs—should this data be collected?—others focus on the outputs: How is the data used? Some believe privacy is an absolute right that must not be compromised; others see privacy as a right that must be balanced against other considerations, such as national security, crime prevention and free speech.

If we’re going to protect privacy effectively, we need to better understand what it is we’re trying to protect and why it deserves protection. Further, we need to advocate this understanding and educate—and listen to—the very subjects of the data we’re trying to protect. Only if we have this shared societal understanding can we lay the foundations for a meaningful and enduring privacy regime. Without it, we’ll chase harms that do not exist and miss those that do.

My point is this: As a profession, we should debate and encourage an informed consensus about what privacy really is, and what it should be, in this digital age. That way, we stand a better chance of creating balanced and effective legal and regulatory frameworks that guard against the real risks to our data subjects. We’ll also better educate the next generation of eager young graduates entering our profession to understand what it is they are protecting and why. And this will benefit us all.

This post first appeared in the IAPP’s Privacy Perspectives blog, available here.

The country of origin principle: a controller’s establishment wish list

Posted on July 1st, 2013 by

Data controllers setting up shop in Europe are typically well aware of the EU’s applicability-of-law rules under Art. 4 of the Data Protection Directive (95/46).  In particular, they know that, by having an “establishment” in one Member State, they are subject only to the data protection law of that Member State – even when they process personal information about individuals in other Member States.  For example, a controller “established” in the UK is subject only to UK data protection law, even when it processes information about individuals resident in France, Germany, Spain, and elsewhere. 

Referred to as the “establishment” test, this model is particularly common among US online businesses selling into the EU.  Without an EU “establishment”, they risk exposure to each of the EU’s 28 different national data protection laws, with all the chaos that entails.  But with an EU “establishment”, they take the benefit of a single Member State’s law, driving down risk and promoting legal certainty.  This principle was most recently upheld when a German court concluded that Facebook is established in Ireland and therefore not subject to German data protection law.

What does it mean to have a data controlling “establishment” though?  It’s a complex question, and one for which the Article 29 Working Party has published detailed and technical guidance.  In purely practical terms though, there are a number of simple measures that controllers wanting to evidence their establishment in a particular Member State can take:

1.  Register as a data controller.  It may sound obvious, but controllers claiming establishment in a particular Member State should make sure to register with the national data protection authority in that Member State.  Aside from helping to show local establishment, failing to register may be an offence.

2.  Review your external privacy notices.  The business should ensure its privacy policy and other outward-facing privacy notices clearly identify the EU controller and where it is established.  It’s all very well designating a local EU subsidiary as a controller, but if the privacy policy tells a different story this will confuse data subjects and be a red flag to data protection authorities.

3.  Review your internal privacy policies.  A controller should have in place a robust internal policy framework evidencing its controllership and showing its commitment to protect personal data.  It should ensure that its staff are trained on those policies and that appropriate mechanisms exist to monitor and enforce compliance.  Failure to produce appropriate policy documentation will inevitably raise questions in the mind of a national data protection authority about the level of control the local entity has over data processing and compliance. 

4.  Data processing agreements.  It’s perfectly acceptable to outsource processing activities from the designated controller to affiliated group subsidiaries or external vendors, but controllers that do so must make sure to have in place appropriate agreements with their outsourced providers – whether those providers are intra-group or external.  It’s vital that, through contractual controls, the designated controller remains in the driving seat about how and why its data is used; it mustn’t simply serve as a ‘rubber stamp’ for data decisions ultimately made by its parent or affiliates.  For example, if EU customer data is hosted on the CRM systems of a UK controller’s US parent, then arm’s length documentation should exist between the UK and US showing that the US processes data only as a processor on behalf of the UK.

5.  Appoint data protection staff.  In some territories, appointing a data protection officer is a mandatory legal requirement for controllers.  Even where it’s not, nominating a local employee to fulfill a data protection officer (or similar) role to oversee local data protection compliance is a sensible measure.  The nominated DPO will fulfill a critical role in reviewing and authorizing data processing policies, systems and activities, thus demonstrating that data decisions are made within the designated controller.  He or she will also provide a consistent and informed interface with the local data protection authority, fostering positive regulatory relationships.

This is not an exhaustive list by any means, but a controller that takes the above practical measures will go a long way towards evidencing “establishment” in its national territory.  This will benefit it not just when corresponding with its own national data protection authority but also when managing enquiries and investigations from overseas data protection authorities, by substantially reducing its exposure to the regimes of those overseas authorities in the first place.

A Brave New World Demands Brave New Thinking

Posted on June 3rd, 2013 by

Much has been said in the past few weeks and months about Google Glass, Google’s latest innovation that will see it shortly launch Internet-connected glasses with a small computer display in the corner of one lens that is visible to, and voice-controlled by, the wearer. The proposed launch capabilities of the device itself are—in pure computing terms—actually relatively modest: the ability to search the web, bring up maps, take photographs and video and share to social media.

So far, so iPhone.

But, because users wear and interact with Google Glass wherever they go, they will have a depth of relationship with their device that far exceeds any previous relationship between man and computer. Then throw in the likely short- to mid-term evolution of the device—augmented reality, facial recognition—and it becomes easy to see why Google Glass is so widely heralded as The Next Big Thing.

Of course, with an always-on, always-worn and always-connected, photo-snapping, video-recording, social media-sharing device, the privacy issues are plentiful, ranging from the potential for crowd-sourced law enforcement surveillance to the more mundane forgetting-to-remove-Google-Glass-when-visiting-the-men’s-room scenario. These concerns have seen a very heated debate play out across the press, on TV and, of course, on blogs and social media.

But to focus the privacy debate just on Google Glass really misses the point. Google Glass is the headline-grabber, but in reality it’s just the tip of the iceberg when it comes to the wearable computing products that will increasingly be hitting the market over the coming years. Pens, watches, glasses (Baidu is launching its own smart glasses too), shoes, whatever else you care to think of—will soon all be Internet-connected. And it doesn’t stop at wearable computing either; think about Internet-connected home appliances: We can already get Internet-connected TVs, game consoles, radios, alarm clocks, energy meters, coffee machines, home safety cameras, baby alarms and cars. Follow this trend and, pretty soon, every home appliance and personal accessory will be Internet-connected.

All of these connected devices—this “Internet of Things”—collect an enormous volume of information about us, and in general, as consumers we want them: They simplify, organize and enhance our lives. But, as a privacy community, our instinct is to recoil at the idea of a growing pool of networked devices that collect more and more information about us, even if their purpose is ultimately to provide services we want.

The consequence of this tends to be a knee-jerk insistence on ever-strengthened consent requirements and standards: Surely the only way we can justify such a vast collection of personal information, used to build incredibly intricate profiles of our interests, relationships and behaviors, is to predicate collection on our explicit consent. That has to be right, doesn’t it?

The short answer to this is “no”—though not, as you might think, for the traditionally given reasons that users don’t like consent pop-ups or that difficulties arise when users refuse, condition or withdraw their consents. 

Instead, it’s simply that explicit consent is lazy. Sure, in some circumstances it may be warranted, but to look to explicit consent as some kind of data collection panacea will drive poor compliance that delivers little real protection for individuals.


Because when you build compliance around explicit consent notices, it’s inevitable that those notices will become longer, all-inclusive, heavily caveated and designed to guard against risk. Consent notices come to be seen as a legal issue, not a design issue, inhibiting the adoption of Privacy by Design development so that, rather than enhancing user transparency, they have the opposite effect. Instead, designers build products with little thought to privacy, safe in the knowledge that they can simply ‘bolt on’ a detailed consent notice as a ‘take it or leave it’ proposition on installation or first use, just like terms of service are now. And, as technology becomes ever more complicated, so it becomes ever more likely that consumers won’t really understand what it is they’re consenting to anyway, no matter how well it’s explained. It’s also a safe bet that users will simply ignore any notice that stands between them and the service they want to receive. If you don’t believe me, then look at cookie consent as a case in point.

Instead, it’s incumbent upon us as privacy professionals to think up a better solution. One that strikes a balance between the legitimate expectations of the individual with regard to his or her privacy and the legitimate interests of the business with regard to its need to collect and use data. One that enables the business to deliver innovative new products and services to consumers in a way that demonstrates respect for their data and engenders their trust and which does not result in lazy, consent-driven compliance. One that encourages controllers to build privacy functionality into their products from the very outset, not address it as an afterthought.

Maybe what we need is a concept of an online “personal space.”

In the physical world, whether through the rules of social etiquette, an individual’s body language or some other indicator, we implicitly understand that there is an invisible boundary we must respect when standing in close physical proximity to another person. A similar concept could be conceived for the online world—ironically, Big Data profiles could help here. Or maybe it’s as simple as promoting a concept of “surprise minimization” as proposed by the California attorney general in her guidance on mobile privacy—the concept that, through Privacy by Design methodologies, you avoid surprising individuals by collecting data from or about them that, in the given context, they would not expect or want.

Whatever the solution is, we’re entering a brave new world; it demands some brave new thinking.

This post first appeared on the IAPP Privacy Perspectives blog here.