Archive for the ‘Privacy by design’ Category

Beware: Europe’s take on the notification of personal data breaches to individuals

Posted on April 10th, 2014 by



The Article 29 Working Party (“WP 29”) has recently issued an Opinion on Personal Data Breach Notification (the “Opinion”). The Opinion focuses on the interpretation of the criteria under which individuals should be notified of breaches that affect their personal data.

Before we analyse the takeaways from the Opinion, let’s take a step back: are controllers actually required to notify personal data breaches?

In Europe, controllers have, for a while now, been either legally required or otherwise advised to consider notifying personal data breaches to data protection regulators and/or subscribers or individuals.

Today, the only EU-wide personal data breach notification requirement derives from Directive 2002/58/EC, as amended by Directive 2009/136/EC (the “e-Privacy Directive”), and applies to providers of publicly available electronic communications services. In some EU member states (for example, in Germany), this requirement has been extended to controllers in other sectors or to all controllers. Similarly, some data protection regulators have issued guidance whereby controllers are advised to report data breaches under certain circumstances.

Last summer, the European Commission adopted Regulation 611/2013 (the “Regulation”), (see our blog regarding the Regulation here), which sets out the technical implementing measures concerning the circumstances, format and procedure for data breach notification required under Article 4 of the e-Privacy Directive.

In a nutshell, providers must notify individuals of breaches that are likely to adversely affect their personal data or privacy, without undue delay and taking account of: (i) the nature and content of the personal data concerned; (ii) the likely consequences of the personal data breach for the individual concerned (e.g. identity theft, fraud, distress, etc.); and (iii) the circumstances of the personal data breach. Providers are exempt from notifying individuals (but not regulators) if they have demonstrated to the satisfaction of the data protection regulator that they have implemented appropriate technological protection measures to render the data unintelligible to any person who is not authorised to access it.

The Opinion provides guidance on how controllers may interpret this notification requirement by analysing 7 practical scenarios of breaches that would meet the ‘adverse effect’ test. For each of them, the WP 29 identifies the potential consequences and adverse effects of the breach, and the security safeguards which might have reduced the risk of the breach occurring in the first place or, indeed, might have exempted the controller from notifying the breach to individuals altogether.

From the Opinion, it is worth highlighting:

The test. The ‘adverse effect’ test is interpreted broadly to include ‘secondary effects’. The WP 29 clearly states that all the potential consequences and potential adverse effects are to be taken into account. This interpretation may be seen as a step too far, as not all ‘potential’ consequences are ‘likely’ to happen, and it will probably lead to a conservative interpretation of the notification requirement across Europe.

Security is key. Controllers should put in place security measures that are appropriate to the risk presented by the processing, with emphasis on the implementation of controls rendering data unintelligible. Compliance with data security requirements should mitigate the risk of personal data breaches and may even, potentially, bring the controller within the exemption from notifying individuals about the breach. Examples of security measures identified as likely to reduce the risk of a breach occurring are: encryption (with a strong key), hashing (with a strong key), back-ups, physical and logical access controls, and regular monitoring of vulnerabilities.
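
To make the ‘render data unintelligible’ idea concrete, here is a minimal sketch (illustrative only, not anything prescribed by the Opinion) of two of the controls listed above: encryption with a strong key and keyed hashing. It assumes the third-party cryptography package; the part that actually earns the exemption is key management, i.e. keeping the keys well away from the data set that might be breached.

```python
# Illustrative sketch of two controls the Opinion points to; not legal guidance.
# Assumes: pip install cryptography
import hashlib
import hmac
import secrets

from cryptography.fernet import Fernet  # authenticated symmetric encryption

# Encryption with a strong key: a stolen copy of the ciphertext alone is
# unintelligible without the key (which must be stored separately).
encryption_key = Fernet.generate_key()
record = b"alice@example.com, 1985-04-12, account 12345678"
ciphertext = Fernet(encryption_key).encrypt(record)

# Keyed hashing with a strong (secret) key: useful where you only ever need to
# match or look up a value, never to read it back.
hash_key = secrets.token_bytes(32)  # also stored separately from the data
identifier_digest = hmac.new(hash_key, b"alice@example.com", hashlib.sha256).hexdigest()

print(ciphertext[:40])
print(identifier_digest[:16])
```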

Procedure. Controllers should have procedures in place to manage personal data breaches. This will involve a detailed analysis of the breach and its potential consequences. In the Opinion, data breaches fall into three categories: availability, integrity and confidentiality breaches. Applying this model may also help controllers analyse a breach.

How many individuals? The number of individuals affected by the breach should not have a bearing on the decision of whether or not to notify them.

Who must notify? It is explicitly stated in the Opinion that breach notification constitutes good practice for all controllers, even for those who are currently not required to notify by law.

There is a growing consensus in Europe that it is only a matter of time before an EU-wide personal data breach notification requirement applying to all controllers (regardless of sector) is in place. Indeed, this will be the case if/when the proposed General Data Protection Regulation is approved: under it, controllers would be subject to strict notification requirements towards both data protection regulators and individuals. This Opinion provides some insight into how European regulators may interpret those requirements under the General Data Protection Regulation.

Therefore, controllers would be well advised to prepare for what is coming their way (see previous blog here). The focus should be on applying security measures (to prevent a breach, and to limit the adverse effects on individuals once a breach has occurred) and on putting procedures in place to manage breaches effectively. Start today: burying your head in the sand is no longer an option.

Information Pollution and the Internet of Things

Posted on September 8th, 2013 by



Kevin Ashton, the man credited with coining the term “The Internet of Things”, once said: “The Internet of Things has the potential to change the world, just as the Internet did. Maybe even more so.”

This couldn’t be more true. The range of potential applications for the Internet of Things, from consumer electronics to energy efficiency and from supply chain management to traffic safety, is breathtaking. Today, there are 6 billion or so connected devices on the planet. By 2020, some estimate that figure will be in the range of 30 to 50 billion. Applying some very basic maths, that’s between 4 and 7 internet-connected “things” per person.
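
For anyone who wants to check the ‘very basic maths’, the quick division below assumes a 2020 world population of roughly 7.5 billion (an assumption, not a figure quoted here).

```python
# Back-of-the-envelope check of the devices-per-person figure above.
world_population_2020 = 7.5e9  # assumed estimate
for devices in (30e9, 50e9):
    print(f"{devices / 1e9:.0f}bn devices -> {devices / world_population_2020:.1f} per person")
# 30bn devices -> 4.0 per person
# 50bn devices -> 6.7 per person
```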

All this, of course, means vast levels of automated data generation, processing and sharing. Forget Big Data: we’re talking mind-blowingly Huge Data. That presents numerous challenges to traditional notions of privacy, and issues of applicability of law, transparency, choice and security have been (and will continue to be) debated at length.

One area that deserves particular attention is how we deal with data access in an everything-connected world. There’s a general notion in privacy that individuals should have a right to access their information – indeed, this right is hard-coded into EU law. But when so much information is collected – and across so many devices – how can we provide individuals with meaningful access to information in a way that is not totally overwhelming?

Consider a world where your car, your thermostat, your DVR, your phone, your security system, your portable health device, and your fridge are all trying to communicate information to you on a 24 x 7 x 365 basis: “This road’s busy, take that one instead”, “Why not lower your temperature by two degrees”, “That program you recorded is ready to watch”, “You forgot to take your medication today” and so on.

The problem will be one of information pollution: there will be just too much information available. How do you stop individuals feeling completely overwhelmed by this? The truth is that no matter how much we, as a privacy community, try to preserve rights for individuals to access as much data as possible, most will never explore their data beyond a very cursory, superficial level. We simply don’t have the energy or time.

So how do we deal with this challenge? The answer is to abstract away from the detail of the data and make readily available to individuals only the information they want to see, when they want to see it. Very few people want a level of detail typically of interest only to IT forensics experts in complex fraud cases – like what IP addresses they used to access a service or the version number of the software on their device. They want, instead, to have access to information that holds meaning for them, presented in a real, tangible and easy to digest way. For want of a better descriptor, the information needs to be presented in a way that is “accessible”.

This means information innovation will be the next big thing: maybe we’ll see innovators create consumer-facing dashboards that collect, sift and simplify vast amounts of information across their many connected devices, perhaps using behavioural, geolocation and spatial profiling techniques to tell consumers the information that matters to them at that point in time.
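
By way of a purely hypothetical sketch of what such a ‘sift and simplify’ layer might do, the snippet below scores incoming notifications from connected devices and surfaces only the few that matter right now. The device names, scoring weights and threshold are all invented for illustration.

```python
# Hypothetical sketch of an "information pollution" filter for connected devices.
# Device names, weights and thresholds are invented for illustration.
from dataclasses import dataclass

@dataclass
class Notification:
    device: str
    message: str
    urgency: float    # 0.0 - 1.0, supplied by the device
    relevance: float  # 0.0 - 1.0, e.g. derived from location or behavioural context

def surface(notifications, limit=3, threshold=0.5):
    """Return only the handful of items worth showing the user right now."""
    scored = [(n.urgency * 0.6 + n.relevance * 0.4, n) for n in notifications]
    top = sorted(scored, key=lambda pair: pair[0], reverse=True)
    return [n for score, n in top[:limit] if score >= threshold]

inbox = [
    Notification("car", "This road's busy, take another route instead", 0.9, 0.9),
    Notification("fridge", "Milk expires tomorrow", 0.2, 0.3),
    Notification("health", "You forgot to take your medication today", 0.8, 0.7),
    Notification("dvr", "That programme you recorded is ready to watch", 0.1, 0.4),
]

for n in surface(inbox):
    print(f"[{n.device}] {n.message}")
```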

And if this all sounds a little too far-fetched, then check out services like Google Now and TripIt, to name just a couple. Services are already emerging to address information pollution, and we have a mere 6 billion devices so far. Imagine what will happen with the next 30 billion or so!

The Internet and the Great Data Deletion Debate

Posted on August 15th, 2013 by



Can your data, once uploaded publicly onto the Web, ever realistically be forgotten?  This was the debate I was having with a friend from the IAPP last night.  Much has been said about the EU’s proposals for a ‘right to be forgotten’ but, rather than arguing points of law, we were simply debating whether it is even possible to purge all copies of an individual’s data from the Web.

The answer, I think, is both yes and no: yes, it’s technically possible, and no, it’s very unlikely ever to happen.  Here’s why:

1. To purge all copies of an individual’s data from the Web, you’d need either (a) to know where all copies of those data exist on the Web, or (b) to give the data some kind of built-in ‘self-destruct’ mechanism so that it knows to purge itself after a set period of time.

2.  Solution (a) creates as many privacy issues as it solves.  You’d need either to create some kind of massive database tracking where all copies of data go on the Web, or each copy of the data would need, somehow, to be ‘linked’ directly or indirectly to all other copies.  Even assuming it was technically feasible, it would have a chilling effect on freedom of speech – consider how likely a whistleblower would be to post content knowing that every copy of that content could be traced back to its original source.  In fact, how would anyone feel about posting content to the Internet knowing that every single subsequent copy could easily be traced back to their original post and, ultimately, back to them?

3.  That leaves solution (b).  It is wholly possible to create files with built-in self-destruct mechanisms, but they would no longer be pure ‘data’ files.  Instead, they would be executable files – i.e. files that can be run as software on the systems on which they’re hosted.  But allowing executable data files to be imported and run on Web-connected IT systems creates huge security exposure – the potential for exploitation by viruses and malicious software would be enormous.  The other possibility would be that the data file contains a separate data field instructing the system on which it is hosted when to delete it – much like a cookie has an expiry date.  That would be fine for proprietary data formats on closed IT systems, but is unlikely to catch on across existing, well-established and standardised data formats like .jpgs, .mpgs etc. across the global Web.  So the prospects for solution (b) catching on also appear slim.
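
To make solution (b) a little more concrete, here is a purely hypothetical sketch of the ‘expiry date’ variant: the data file travels with a sidecar metadata record telling the hosting system when to delete it, much as a cookie carries an expiry date. The file and field names are invented, and the scheme only works if every system that copies the file also honours the metadata – which is precisely the weakness discussed above.

```python
# Hypothetical sketch of "solution (b)": data carrying its own expiry date.
# Relies entirely on every hosting system honouring the metadata - the weak point.
import json
import os
import time

def store_with_expiry(path: str, payload: bytes, ttl_seconds: int) -> None:
    """Write the data file plus a sidecar record saying when to delete it."""
    with open(path, "wb") as f:
        f.write(payload)
    with open(path + ".meta.json", "w") as f:
        json.dump({"delete_after": time.time() + ttl_seconds}, f)

def purge_expired(directory: str) -> None:
    """A housekeeping job the hosting system would have to run voluntarily."""
    for name in os.listdir(directory):
        if not name.endswith(".meta.json"):
            continue
        meta_path = os.path.join(directory, name)
        with open(meta_path) as f:
            meta = json.load(f)
        if time.time() >= meta["delete_after"]:
            data_path = meta_path[: -len(".meta.json")]
            for p in (data_path, meta_path):
                if os.path.exists(p):
                    os.remove(p)
```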

What are the consequences of this?  If we can’t purge copies of individuals’ data spread across the Internet, where does that leave us?  Likely the only realistic solution is to control the propagation of the data at source in the first place.  Achieving that requires a combination of:

(a)  Awareness and education – informing individuals through privacy statements and contextual notices how their data may be shared, and educating them not to upload content they (or others) wouldn’t want to share;

(b)  Product design – utilising privacy impact assessments and privacy by design methodologies to assess product / service intrusiveness at the outset, and then designing systems that don’t allow illegitimate data propagation; and

(c)  Regulation and sanctions – we need proportionate regulation backed by appropriate sanctions to incentivise realistic protections and discourage illegitimate data trading.  

No one doubts that privacy on the Internet is a challenge, and nowhere does it become more challenging than with the speedy and uncontrolled copying of data.   But let’s not focus on how we stop data once it’s ‘out there’ – however hard we try, that’s likely to remain an unrealistic goal.  Let’s focus instead on source-based controls – this is achievable and, ultimately, will best protect individuals and their data.

ICO’s draft code on Privacy Impact Assessments

Posted on August 8th, 2013 by



This week the Information Commissioner’s Office (‘ICO’) announced a consultation on its draft Conducting Privacy Impact Assessments Code of Practice (the ‘draft code’). The draft code and the consultation document are available at http://www.ico.org.uk/about_us/consultations/our_consultations  and the deadline for responding is 5 November 2013.

When it comes into force, the new code of practice will set out ICO’s expectations on the conduct of Privacy Impact Assessments (‘PIAs’) and will replace ICO’s current PIA Handbook. So why is the draft code important and how does it differ from the PIA Handbook?

  • PIAs are a valuable risk management instrument that can function as an early warning system while, at the same time, promoting better privacy and substantive accountability. Although there is at present no statutory requirement to carry out PIAs, ICO expects organisations to do so.
  • For instance, in the context of carrying out audits, ICO has criticised controllers who had not rolled out a framework for carrying out PIAs. More importantly, the absence or presence of a risk assessment is a determinative factor in ICO’s decision on whether or not to take enforcement action. When ICO talks about the absence or presence of a risk assessment, it means the conduct of some form of PIA.
  • Impact assessments are likely soon to become a mandatory statutory requirement across the EU, as the current version of the draft EU Data Protection Regulation requires ‘Data Protection Impact Assessments’. Note, however, that the DPIAs mandated by article 33 of the draft Regulation have a narrower scope than PIAs: the former focus on ‘data protection risks’, whereas ‘privacy risks’ is a broader concept that encompasses, in addition to data protection, notions such as privacy of personal behaviour or privacy of personal communications.
  • The fact that ICO’s guidance on PIAs will now take the form of a statutory Code of Practice (as opposed to a ‘Handbook’) means that it will have increased evidentiary significance in legal proceedings before courts and tribunals on questions relevant to the conduct of PIAs.

The PIA Handbook is generally too cumbersome and convoluted. The aim of the draft code is to simplify the current guidance and promote practical PIAs that are less time consuming and complex, and as flexible as possible in order to be adapted to an organisation’s existing project and risk management processes.  However, on an initial review of the draft code I am not convinced that it achieves the optimum results in this regard.  Consider for example the following expectations set out in the draft code which did not appear in the PIA Handbook:

  • In addition to internal stakeholders, organisations should work with partner organisations and with the public. In other words, ICO encourages controllers to test their PIA analysis with the individuals who will be affected by the project that is being assessed.
  • Conducting and publicising the PIA will help build trust with the individuals using the organisation’s services. In other words, ICO expects that PIAs will be published in certain circumstances.
  • PIAs should incorporate 7 distinct steps and the draft code provides templates for questionnaires and reports, as well as guidance on how to integrate the PIA with project and risk management processes.

Overall, although the draft code is certainly an improvement compared to the PIA Handbook, it remains cumbersome and prescriptive.  It also places a lot of emphasis on documentation, recording decisions and record keeping.  In addition, the guidance and some of the templates include privacy jargon that is unlikely to be understood by staff who are not privacy experts, such as project managers or work-stream leads who are most likely to be asked to populate the PIA documentation in practice.

Many organisations are likely to want a simpler, more streamlined and more efficient PIA process with fewer steps, simpler tools / documents and clearer guidance, and which incorporates legal requirements and ICO’s essential expectations without unduly delaying the launch of new processing operations. Such organisations are also likely to want to make their voice heard in the context of ICO’s consultation on the draft code.

Use of technology in the workplace: what are the risks?

Posted on July 29th, 2013 by



Technology is omnipresent in the workplace. In recent years, the use of technological tools and equipment by companies has grown exponentially. Various types of electronic equipment that were previously reserved for military or scientific facilities (such as computers, smartphones, CCTV cameras, GPS systems or biometric devices) are now commonly used by many private companies and are easy and cheap to install.

Technology undoubtedly provides companies with new opportunities for improving work performance and increasing security on their premises. At the same time, employees’ personal data are more regularly collected and potential threats to their privacy are more commonplace. In some circumstances, the use of advanced technology can pose higher security threats, which outweigh the benefits the technology provides (see our previous blog post on the risks of BYOD).

In Europe, the use of technology in the workplace will almost certainly trigger the application of privacy and labour laws aimed at safeguarding the employees’ right to privacy. In this context, data protection authorities are particularly attentive to the risks to employees that can derive from the use of technology in the workplace and to its potential intrusiveness. Earlier this year, the French Data Protection Authority (“CNIL”) reported that 15% of all complaints it received were work-related (see our previous blog post on the CNIL’s report). In many cases, employees felt threatened by the invasiveness of video cameras (and other technologies) being used in the workplace. For that reason, the CNIL published several practical guidelines instructing employers on how to use technology in the workplace in accordance with the French Data Protection Act and the French rules on privacy.

Employers are faced with the challenge of finding a way to use technology without falling foul of privacy laws. When considering whether to implement a particular technology in the workplace, companies should take appropriate measures to ensure that those technologies are implemented in accordance with applicable privacy and labour laws. As a first step, it is often good practice to carry out a privacy impact assessment that will allow the company to identify any potential threats to employees and the risk of the company breaching privacy and labour laws. Also, the general data protection principles (lawfulness and fairness of processing, purpose limitation, proportionality, transparency, security and confidentiality) should be fully integrated into the decision-making process and privacy-by-design should be an integral part of any new technology that is deployed within the company. In particular, companies should ensure that employees are properly informed, both individually and collectively, prior to the collection of their personal data. Additionally, in many EU jurisdictions, it is often necessary for companies to inform and/or consult the employee representative bodies (such as a Works Council) on such issues. Finally, companies must grant employees access to their personal data in accordance with applicable local laws.

So, are technology and privacy incompatible? Not necessarily. Under European law, there is no general prohibition on using technology in the workplace. However, as is often the case under privacy law, the critical point is to find a fair balance between the organisation’s goals and purposes when using a particular technology and the employees’ privacy rights.

Click here to access my article on the use of technology in the workplace under French privacy law.

The true meaning of privacy (and why I became a privacy professional)

Posted on July 5th, 2013 by



Long before I became a privacy professional, I first graduated with a degree in computer science. At the time, like many graduates, I had little real notion of what it was I wanted to do with my life, so I took a couple of internships working as a database programmer. That was my first introduction to the world of data.

I quickly realized that I had little ambition to remain a career programmer, so I began to look at other professions. In my early twenties, and having the kind of idealistic tendencies commonplace in many young graduates, I decided I wanted to do something that mattered, something that would—in some way—benefit the world: I chose to become a lawyer.

Not, you might think, the most obvious choice given the (unfair) reputation that the legal profession tends to suffer. Nevertheless, I was attracted to a profession bound by an ethical code, that believed in principles like “innocent until proven guilty” and acting in the best interests of the client, and that took the time to explore and understand both sides to every argument. And, if I’m completely honest, I was also attracted by the unlimited access to truly wonderful stationery that a legal career would afford.

After brief stints as a trainee in real estate law, litigation and environmental law, I decided to pursue a career as a technology lawyer. After all, given my background, it seemed a natural fit, and having a technical understanding of the difference between things like megabytes and megabits, RAM and ROM and synchronous and asynchronous broadband gave me a head start over some of my peers.

On qualifying, I began picking up the odd bit of data protection work (Read: drafting privacy policies). Over time, I became a privacy “go to” person in the firms I worked at, not so much through any great talent on my part but simply because, at the time, I was among the small number of lawyers who knew anything about privacy and, for reasons I still don’t really understand, my colleagues considered data protection work a bewilderingly complex area of law, best left to those who “get” it—much like the way I felt about tax and antitrust law.

It’s not a career path I regret. I love advising on privacy issues because privacy speaks to all the idealized ethical notions I had when I first graduated. With privacy, I get to advise on matters that affect people, that concern right or wrong, that are guided by lofty ethical principles about respecting people’s fundamental rights. I run projects across scores of different countries, each with different legal regimes, regulators and cultural sensitivities. Intellectually, it is very challenging and satisfying.

Yet, at the same time, I have grown increasingly concerned about the dichotomy between the protections law and regulation see fit to mandate and what, in practice, actually delivers the best protection for people’s personal information. To my mind, far too much time is spent on filing registrations and carefully designing legal terms that satisfy legal obligations and create the impression of good compliance; far too little time is spent on privacy impact analyses, careful system design, robust vendor procurement processes and training and audit.

Lawyers, naturally enough, often think of privacy in terms of legal compliance, but any professional experienced in privacy will tell you that many legal obligations are counterintuitive or do little, in real terms, to protect people’s information. Take the EU’s binary controller/processor regime, for example. Why do controllers bear all the compliance risk? Surely everyone who handles data has a role to play in its protection. Similarly, what good do local controller registrations do anyone?  They’re a costly, burdensome paperwork exercise that is seldom completed efficiently, accurately or—in many cases—even at all. And all those intra-group data sharing agreements—how much time do you spend negotiating their language with regional counsel rather than implementing measures to actually protect data?

Questions like these trouble me.  While the upcoming EU legal reform attempts to address several of these issues, many of its proposed changes to me seem likely to further exacerbate the problem. But for every critic of the reforms, there is an equally vocal proponent of them. So much so that reaching an agreed position between the European Council and Parliament—or even just within the Parliament—seems a near-insurmountable task.

Why is this reform so polarizing? It would be easy to characterize the division of opinions simply as being a split between regulators and industry, privacy advocates and industry lobbyists—indeed, many do. However, the answer is, I suspect, something more fundamental: namely, that we lack a common understanding of what “privacy” is and why it deserves protection.

As privacy professionals, we take for granted that “privacy” is something important and in need of protection. Yet privacy means different things to different people. To some, it means having the ability to sanction uses of our information before they happen; to others, it means being able to stop uses to which we object. Some focus on the inputs—should this data be collected?—others focus on the outputs: How is the data used? Some believe privacy is an absolute right that must not be compromised; others see privacy as a right that must be balanced against other considerations, such as national security, crime prevention and free speech.

If we’re going to protect privacy effectively, we need to better understand what it is we’re trying to protect and why it deserves protection. Further, we need to advocate this understanding and educate—and listen to—the very subjects of the data we’re trying to protect. Only if we have this shared societal understanding can we lay the foundations for a meaningful and enduring privacy regime. Without it, we’ll chase harms that do not exist and miss those that do.

My point is this: As a profession, we should debate and encourage an informed consensus about what privacy really is, and what it should be, in this digital age. That way, we stand a better chance of creating balanced and effective legal and regulatory frameworks that guard against the real risks to our data subjects. We’ll also better educate the next generation of eager young graduates entering our profession to understand what it is they are protecting and why. And this will benefit us all.

This post first appeared in the IAPP’s Privacy Perspectives blog, available here.

In defence of the privacy policy

Posted on March 29th, 2013 by



Speaking at the Games Developers’ Conference in San Francisco yesterday on the panel “Privacy by [Game] Design”, I was thrown an interesting question: Does the privacy policy have any place in the forward-thinking privacy era?

To be sure, privacy policy bashing has become populist stuff in recent years, and the role of the privacy policy is a topic I’ve heard debated many, many times. The normal conclusion to any discussion around this point is that privacy policies are too long, too complex and simply too unengaging for any individual to want to read them. Originally intended as a fair processing disclosure about what businesses do with individuals’ data, critics complain that they have over time become excessively lengthy, defensive, legalistic documents aimed purely at protecting businesses from liability. Just-in-time notices, contextual notices, privacy icons, traffic lights, nutrition labels and gamification are the way forward. See, for example, this recent post by Peter Fleischer, Google’s Global Privacy Counsel.

This is all fair criticism. But that doesn’t mean it’s time to write off privacy policies – we’re not talking about an either/or situation here. They continue to serve an important role in ensuring organisational accountability. Committing a business to set down, in a single, documented place, precisely what data it collects, what it does with that data, who it shares it with, and what rights individuals have, helps keep it honest. More and more, I find that clients put considerable effort into getting their privacy policies right, carefully checking that the disclosures they make actually map to what they do with data – stimulating conversations with other business stakeholders across product development, marketing, analytics and customer relations functions. The days when lawyers were told “just draft something” are long gone, at least in my experience.

This internal dialogue keeps interested stakeholders informed about one another’s data uses and facilitates discussions about good practice that might otherwise be overlooked. If you’re going to disclose what you do in an all-encompassing, public-facing document – one that may, at some point, be pored over by disgruntled customers, journalists, lawyers and regulators – then you want to make sure that what you do is legit in the first place. And, of course, while individuals seldom ever read privacy policies in practice, if they do have a question or a complaint they want to raise, then a well-crafted privacy policy serves (or, at least, should serve) as a comprehensive resource for finding the information they need.

Is a privacy policy the only way to communicate with your consumers what you do with their data? No, of course not. Is it the best way? Absolutely not: in an age of device and platform fragmentation, the most meaningful way is through creative Privacy by Design processes that build a compelling privacy narrative into your products and services. But is the privacy policy still relevant and important? Yes, and long may this remain the case.

Designing privacy for mobile apps

Posted on March 16th, 2013 by



My phone is my best friend.  I carry it everywhere with me, and entrust it with vast amounts of my personal information, for the most part with little idea about who has access to that information, what they use it for, or where it goes.  And what’s more, I’m not alone.  There are some 6 billion mobile phone subscribers out there, and I’m willing to bet that most – if not all of them – are every bit as unaware of their mobile data uses as me.

So it’s hardly surprising that the Article 29 Working Party has weighed in on the issue with an “opinion on apps on smart devices” (available here).  The Working Party splits its recommendations across the four key players in the mobile ecosystem (app developers, OS and device manufacturers, app stores and third parties such as ad networks and analytics providers), with app developers receiving the bulk of the attention.

Working Party recommendations

Many of the Working Party’s recommendations don’t come as a great surprise: provide mobile users with meaningful transparency, avoid data usage creep (data collected for one purpose shouldn’t be used for other purposes), minimise the data collected, and provide robust security.  But other recommendations will raise eyebrows, including that:

(*)  the Working Party doesn’t meaningfully distinguish between the roles of an app publisher and an app developer – mostly treating them as one and the same.  So, the ten-person design agency engaged by Global Brand plc to build it a whizzy new mobile app is effectively treated as having the same compliance responsibilities as Global Brand, even though it will ultimately be Global Brand that publicly releases the app and exploits the data collected through it;

(*)  the Working Party considers EU data protection law to apply whenever a data collecting app is released into the European market, regardless of where the app developer itself is located globally.  So developers who are based outside of Europe but who enjoy global release of their app on Apple’s App Store or Google Play may unwittingly find themselves subjected to EU data protection requirements;

(*)  the Working Party takes the view that device identifiers like UDID, IMEI and IMSI numbers all qualify as personal data, and so should be afforded the full protection of European data protection law.  This has a particular impact on the mobile ad industry, which typically collects these numbers for ad serving and ad tracking purposes but aims to mitigate regulatory exposure by carefully avoiding collection of “real world” identifiers;

(*)  the Working Party places a heavy emphasis on the need for user opt-in consent, and does not address situations where the very nature of the app may make it so obvious to the user what information the app will collect as to make consent unnecessary (or implied through user download); and

(*)  the Working Party does not address the issue of data exports.  Most apps are powered by cloud-based functionality and supported by global service providers meaning that, perhaps more than in any other context, the shortfalls of common data export solutions like model clauses and safe harbor become very apparent.

Designing for privacy
Mobile privacy is hard.  In her guidance on mobile apps, the California Attorney-General rightly acknowledged that: “Protecting consumer privacy is a team sport. The decisions and actions of many players, operating individually and jointly, determine privacy outcomes for users. Hardware manufacturers, operating system developers, mobile telecommunications carriers, advertising networks, and mobile app developers all play a part, and their collaboration is crucial to enabling consumers to enjoy mobile apps without having to sacrifice their privacy.”
Building mobile apps that are truly privacy compliant requires a privacy by design approach from the outset.  But, for any mobile app build, there are some top tips that developers should be aware of:
  1. Always, always have a privacy policy.  The poor privacy policy has been much maligned in recent years but, whether or not it’s the best way to tell people what you do with their information (it’s not), it still remains an expected standard.  App developers need to make sure they have a privacy policy that accurately reflects how they will use and protect individuals’ personal information and make this available both prior to download (e.g. published on the app store download page) and in-app.  Not having this is a sure-fire way to fall foul of privacy authorities – as evidenced in the ongoing Delta Airlines case.
  2. Surprise minimisation.  The Working Party emphasises the need for user consents and, in certain contexts, consent will of course be appropriate (e.g. when accessing real-time GPS data).  But, to my mind, the better standard is that proposed by the California Attorney-General of “surprise minimisation”, which she explains as the use of “enhanced measures to alert users and give them control over data practices that are not related to an app’s basic functionality or that involve sensitive information.” Just-in-time privacy notices combined with meaningful user controls are the way forward.
  3. Release “free” and “premium” versions.  The Working Party says that individuals must have real choice over whether or not apps collect personal information about them.  However, developers will commonly complain that real choice simply isn’t an option – if they’re going to provide an app for free, then they need to collect and monetise data through it (e.g. through in-app targeted advertising).  An obvious solution is to release two versions of the app – one for “free” that is funded by exploiting user data and one that is paid for, but which only collects the user data necessary to operate the app.  That way, users who don’t want to have their data monetised can choose to download the paid-for “premium” version instead – in other words, they have choice.
  4. Provide privacy menu settings.  It’s surprising how relatively few apps offer this, but privacy settings should be built into app menus as a matter of course – for example, offering users the ability to delete app usage histories, turn off social networking integration, restrict location data use and so on (a minimal sketch follows this list).  Empowered users are happy users, and happy users mean happy regulators; and
  5. Know Your Service Providers.  Apps serve as a gateway to user data for a wide variety of mobile ecosystem operators – and any one of those operators might, potentially, misuse the data it accesses.  Developers need to be particularly careful when integrating third party APIs into their apps, making sure that they properly understand their service providers’ data practices.  Failure to do proper due diligence will leave the developer exposed.
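
As a hypothetical illustration of tips 2 and 4 together (nothing here comes from the Working Party or the Attorney-General), the sketch below models a small in-app privacy menu and a just-in-time prompt that gates access to location data; the setting names, defaults and wording are invented.

```python
# Hypothetical sketch of tips 2 and 4: an in-app privacy menu plus a
# just-in-time prompt before touching location data. Names and defaults invented.
from dataclasses import dataclass, field
from typing import Callable, List, Optional, Tuple

@dataclass
class PrivacySettings:
    share_location: bool = False            # off until the user opts in
    social_integration: bool = True
    usage_history: List[str] = field(default_factory=list)

    def clear_usage_history(self) -> None:
        self.usage_history.clear()

def get_location(settings: PrivacySettings,
                 ask_user: Callable[[str], bool]) -> Optional[Tuple[float, float]]:
    """Just-in-time consent: prompt only at the moment location is actually needed."""
    if not settings.share_location:
        settings.share_location = ask_user(
            "This feature uses your precise location to suggest nearby results. Allow?"
        )
    if not settings.share_location:
        return None                          # degrade gracefully without location data
    return (51.5074, -0.1278)                # placeholder for a real GPS lookup

prefs = PrivacySettings()
print(get_location(prefs, ask_user=lambda prompt: True))  # prompted once, then allowed
prefs.clear_usage_history()                               # "delete app usage history"
```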

Any developer will tell you that you don’t build great products by designing to achieve compliance; instead, you build great products by designing a great user experience.  Fortunately, in privacy, both goals are aligned.  A great privacy experience is necessarily part and parcel of a great user experience, and developers need to address users’ privacy needs at the earliest stages of development, through to release and beyond.

European Parliament’s take on the Regulation: Stricter, thicker and tougher

Posted on January 9th, 2013 by



 

If anyone thought that the European Commission’s draft Data Protection Regulation was prescriptive and ambitious, then prepare yourselves for the European Parliament’s approach. The much-awaited draft report by the LIBE Committee with its revised proposal (as prepared by its rapporteur Jan-Philipp Albrecht) has now been made available, and what was already a very complex piece of draft legislation has become by far the strictest, most wide-ranging and potentially most difficult to navigate data protection law ever to be proposed.

This is by no means the end of the legislative process, but here are some of the highlights of the European Parliament’s proposal currently on the table:

*     The territorial scope of application to non EU-based controllers has been expanded, in order to catch those collecting data of EU residents with the aim of (a) offering goods or services (even if they are free) or (b) monitoring those individuals (not just their behaviour).

*     The concept of ‘personal data’ has also been expanded to cover information relating to someone who can be singled out (not just identified).

*     The Parliament has chosen to give an even bigger role to ‘consent’ (which must still be explicit), since this is regarded as the best way for individuals to control the uses made of their data. In turn, relying on the so-called ‘legitimate interests’ ground to process personal data has become much more onerous, as controllers must then inform individuals about such specific processing and the reasons why those legitimate interests override the interests or fundamental rights and freedoms of the individual.

*     Individuals’ rights have been massively strengthened across the board. For example, the right of access has been expanded by adding to it a ‘right to data portability’ and the controversial ‘right to be forgotten’ potentially goes even further than originally drafted, whilst profiling activities are severely restricted.

*     All of the so-called ‘accountability’ measures imposed on data controllers are either maintained or reinforced. For example, the obligation to appoint a data protection officer will kick in when personal data relating to 500 or more individuals is processed per year, and new principles such as data protection by design and by default are now set to apply to data processors as well.

*     The ‘one stop shop’ concept that made a single authority competent in respect of a controller operating across Member States has been considerably diluted, as the lead authority is now restricted to just acting as a single contact point.

*     Many of the areas that had been left for the Commission to deal with via ‘delegated acts’ are now either specifically covered by the Regulation itself (hence becoming more detailed and prescriptive) or left for the proposed European Data Protection Board to specify, therefore indirectly giving a legislative power to the national data protection authorities.

*     An area of surprising dogmatism is international data transfers, where the Parliament has added further conditions to the criteria for adequacy findings, placed a time limit of 2 years on previously granted adequacy decisions or authorisations for specific transfers (it’s not clear what happens afterwards – is Safe Harbor at risk?), slightly reinforced the criteria for BCR authorisations, and limited transfers to non-EU public authorities and courts.

*     Finally, with regard to monetary fines, whilst the Parliament gives data protection authorities more discretion to impose sanctions, more instances of possible breaches have been added to the most severe categories of fines.

All in all, the LIBE Committee’s draft proposal represents a significant toughening of the Commission’s draft (which was already significantly tougher than the existing data protection directive). Once it is agreed by the Parliament, heated negotiations with the Council of the EU and other stakeholders (including the Commission itself) will then follow and we have just over a year to get the balance right. Much work no doubt awaits.

 

2013 to be the year of mobile regulation?

Posted on January 4th, 2013 by



After a jolly festive period (considerably warmer, I’m led to understand, for me in Palo Alto than for my colleagues in the UK), the New Year is upon us and privacy professionals everywhere will no doubt be turning their minds to what 2013 has in store for them.  Certainly, there are plenty of developments to keep abreast of, ranging from the ongoing EU regulatory reform process through to the recent formal recognition of Binding Corporate Rules for processors.  My partner, Eduardo Ustaran, has posted an excellent blog outlining his predictions here.

But one safe bet for greater regulatory attention this year is mobile apps and platforms.  Indeed, with all the excitement surrounding cookie consent and EU regulatory reform, mobile has remained largely overlooked by EU data protection authorities to date.  Sure, we’ve had the Article 29 Working Party opine on geolocation services and on facial recognition in mobile services.  The Norwegian Data Protection Inspectorate even published a report on mobile apps in 2011 (“What does your app know about you?“).  But really, that’s been about it.  Pretty uninspiring, not to mention surprising, when consumers are fast abandoning their creaky old desktop machines and accessing online services through shiny new smartphones and tablets: Forbes even reports that mobile access now accounts for 43% of total minutes spent on Facebook by its users.

Migration from traditional computing platforms to mobile computing is not, in and of itself, enough to guarantee regulator interest.  But there are plenty of other reasons to believe that mobile apps and platforms will come under increased scrutiny this year:

1.  First, meaningful regulatory guidance is long overdue.  Mobiles are inherently more privacy invasive than any other computing platform.  We entrust more data to our mobile devices (in my case, my photos, address books, social networking, banking and shopping account details, geolocation patterns, and private correspondence) than to any other platform and generally with far less security – that 4-digit PIN really doesn’t pass muster.  We download apps from third parties we’ve often scarcely ever heard of, with no idea as to what information they’re going to collect or how they’re going to use it, and grant them all manner of permissions without even thinking – why, exactly, does that flashlight app need to know details of my real-time location?  Yet despite the huge potential for privacy invasion, there persists a broad lack of understanding as to who is accountable for compliance failures (the app store, the platform provider, the network provider or the app developer) and what measures they should be implementing to avoid privacy breaches in the first place.  This uncertainty and confusion makes regulatory involvement inevitable.

2.  Second, regulators are already beginning to get active in the mobile space – if they weren’t, the point above would be pure speculation.  It’s not, though.  On my side of the Pond, we’ve recently seen the California Attorney General file suit against Delta Air Lines for its failure to include a privacy policy within its mobile app (this action itself following letters sent by the AG to multiple app providers warning them to get their acts together).  Then, a few days later, the FTC launched a report on children’s data collection through mobile apps, in which it indicated that it was launching multiple investigations into potential violations of the Children’s Online Privacy Protection Act (COPPA) and the FTC Act’s unfair and deceptive practices regime.  The writing is on the wall, and it’s likely EU regulators will begin following the FTC’s lead.

3.  Third, the Article 29 Working Party intends to do just that.  In a press release in October, the Working Party announced that “Considering the rapid increase in the use of smartphones, the amount of downloaded apps worldwide and the existence of many small-sized app-developers, the Working Party… [will] publish guidance on mobile apps… early next year.” So guidance is coming and, bearing in mind that the Article 29 Working Party is made up of representatives from national EU data protection authorities, it’s safe to say that mobile privacy is riding high on the EU regulatory agenda.

In 2010, the Wall Street Journal reported: “An examination of 101 popular smartphone “apps”—games and other software applications for iPhone and Android phones—showed that 56 transmitted the phone’s unique device ID to other companies without users’ awareness or consent. Forty-seven apps transmitted the phone’s location in some way. Five sent age, gender and other personal details to outsiders… Many apps don’t offer even a basic form of consumer protection: written privacy policies. Forty-five of the 101 apps didn’t provide privacy policies on their websites or inside the apps at the time of testing.“  Since then, there hasn’t been a great deal of improvement.  My money’s on 2013 being the year that this will change.