Creating a successful data retention policy

Posted on April 22nd, 2014

With the excitement generated by the recent news that the European Court of Justice has, in effect, struck down the EU’s Data Retention Directive (see our earlier post here), now seems as good a time as any to revisit the topic of data retention generally.

Whereas the Data Retention Directive required ISPs and telcos to hold onto communications metadata, the Data Protection Directive is sector-blind and pulls in exactly the opposite direction: put another way, it requires all businesses not to hold onto personal data for longer than is “necessary”.

That’s the kind of thing that’s easy for a lawyer to say, but difficult to implement in practice.  How do you know if it’s “necessary” to continue holding data?  How long does “necessary” last?  How do you explain to internal business stakeholders that what they consider “necessary” (i.e. commercially desirable) is not the same thing as what the law considers “necessary”?

Getting the business on-side

If you’re a CPO, compliance officer or in-house lawyer looking to create your company’s data retention policy, you’ll need to get the business on-side.  Suggesting to the business that it deletes valuable company data after set periods of time may not initially be well-received but, for your policy to be a success, you’ll ultimately need the business’s support.

To get this buy-in, you need to communicate the advantages of a data retention policy and, fortunately, these are numerous.  Consider, for example:

  • Reduced IT expenditure:  By deleting data at defined intervals, you reduce the overall amount of data you’ll be storing.  That in turn means you need fewer systems to host that data and less archiving, back-up and offsite storage – delivering significant cost savings and keeping your CFO happy.
  • Improved security:  It seems obvious, but it’s amazing how often this is overlooked.  The less you hold, the less – frankly – you have to lose.  Nobody wants to be making a data breach notification to a regulator AND explaining why they were continuing to hold on to 20-year-old records in the first place.
  • Minimised data disclosures:  Most businesses are familiar with the rights individuals have to request access to their personal information, as well as the attendant business disruption these requests can cause.  As with the above point, the less data you hold, the less you’ll need to disclose in response to one of these requests (meaning the less effort – and resource – you need to put into finding that data).  This holds true for litigation disclosure requests too.
  • Legal compliance:  Last, but by no means least, you need a data retention policy for legal compliance – after all, it’s the law not to hold data for longer than “necessary”.  Imagine a DPA contacting you and asking for details of your data retention policy.  It would be a bad place to be in if you didn’t have something ready to hand over.  

Key considerations

Once you have persuaded the business that creating a data retention policy is a good idea, the next task is then to go off and design one!  This will involve input from various internal stakeholders (particularly IT staff) so it’s important you approach them with a clear vision for how to address some of the critical retention issues.

Among the important points to consider are:

  • Scope of the policy:  What data is in-scope?  Are you creating a data retention policy just for, say, HR data or across all data processed by the business?  There’s a natural tension here between achieving full compliance and keeping the project manageable (i.e. not biting off more than you can chew).  It may be easier to “prove” that your policy works on just one dataset first and then roll it out to additional, wider datasets later.
  • One-size-fits-all vs. country-by-country approach:  Do you create a policy setting one-size-fits-all retention limits across all EU (possibly worldwide) geographies, or set nationally-driven limits with the result that records kept for, say, six years in one country must be deleted after just two in another?  Again, the balance to be struck here is between compliance and risk on the one hand, and practicality and ease of administration on the other.
  • Records retention vs. data retention:  Will your policy operate at the “record” level or the “data” level?  The difference is this: a record (such as a record of a customer transaction) may comprise multiple data elements (e.g. name, cardholder number, item purchased, date etc.)  A crucial decision then is whether your policy should operate at the “record” level (so that the entire customer transaction record is deleted after [x] years) or at the “data”  level (so that, e.g., the cardholder number is deleted after [x] years but other data elements are kept for a longer period).  This is a point where it is particularly important to discuss with IT stakeholders what is actually achievable.
  • Maximum vs minimum retention periods:  Apart from setting maximum data retention periods, there may be  commercial, legal or operational reasons for the business to want to set minimum retention periods as well – e.g. for litigation defence purposes.  At an early stage, you’ll need to liaise with colleagues in HR, IT, Accounting and Legal teams to identify whether any such reasons exist and, if so, whether these should be reflected in your policy.
  • Other relevant considerations:  What other external factors will impact the data retention policy you design?  Aside from legal and commercial requirements, is the business subject to, for example, sector-specific rules, agreements with local works councils, or even third party audit requirements (e.g. privacy seal certifications – particularly common in Germany)?  These factors all need to be identified and their potential impact on your data retention policy considered at an early stage.
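The “record vs. data” choice above is easiest to see in code.  The short Python sketch below (field names and retention periods are purely illustrative assumptions, not recommendations) applies a data-level policy, where each element of a customer transaction record expires on its own schedule:

```python
from datetime import date

# Illustrative data-level retention schedule: each element of a customer
# transaction record has its own retention period, expressed in days.
# (Field names and periods are hypothetical examples, not legal advice.)
RETENTION_DAYS = {
    "cardholder_number": 365,   # delete after one year
    "name": 6 * 365,            # keep for six years
    "item_purchased": 6 * 365,
}

def apply_retention(record: dict, created: date, today: date) -> dict:
    """Drop only those elements whose retention period has expired;
    a record-level policy would instead delete the whole record at once."""
    age_days = (today - created).days
    return {
        field: value
        for field, value in record.items()
        if age_days <= RETENTION_DAYS.get(field, 0)
    }

record = {"cardholder_number": "4111-xxxx", "name": "A. Smith", "item_purchased": "laptop"}
# Two years after creation, the cardholder number is purged but the
# other elements of the record survive.
redacted = apply_retention(record, created=date(2012, 4, 1), today=date(2014, 4, 1))
```

A record-level policy is then just the degenerate case: one retention period shared by every field, so the whole record disappears together.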

Getting it right at the beginning means that the subsequent stages of your data retention policy design and roll out should become much smoother – you’ll get the support you need from the business and you’ll have dealt with the difficult questions in a considered, strategic way upfront rather than in a piecemeal (and likely, inconsistent) fashion as the policy evolves.

And with so much to benefit from adopting a retention policy, why would you wait any longer?


The ECJ finds the Data Retention Directive disproportionate

Posted on April 11th, 2014

The Data Retention Directive has always been controversial. Born as it was after the tragedies of the 2004 Madrid and 2005 London bombings, it has faced considerable criticism concerning its scope and lengthy debate over whether it is a measured response to the perceived threat.  It is therefore no surprise that over the years a number of constitutional courts in EU Member States have struck down the implementing legislation in their local law as unconstitutional (e.g. Romania and Germany).  But now the ECJ, having considered references from Irish and Austrian courts, has ruled that the Directive is invalid since it is disproportionate in scope and incompatible with the rights to privacy and data protection under the EU Charter of Fundamental Rights.

What did the ECJ object to?

The ECJ’s analysis focused on the extent of the Directive’s interference with the fundamental rights under Article 7 (right to privacy) and Article 8 (right to data protection) of the Charter. Any limitation of fundamental rights must be provided for by law, be proportionate, necessary and genuinely meet objectives of general interest. The ECJ considered that the Directive’s interference was ‘wide-ranging and…particularly serious’. Yet the ECJ conceded that the interference did not extend to obtaining knowledge of the content of communications and that its material objective – the fight against serious crime – was an objective of general interest. Consequently the key issue was whether the measures under the Directive were proportionate and necessary to fulfil the objective.

For the ECJ, the requirements under the Directive do not satisfy the ‘strictly necessary’ test.  In particular, the ECJ emphasised the ubiquitous nature of the retention – all data, all means, all subscribers and registered users.  The requirements affect individuals indiscriminately, without exception.  Furthermore, there are no objective criteria limiting national authorities’ access to and use of the data.  All in all, the interference is not limited to what is strictly necessary and is consequently disproportionate.

Of particular importance, given the on-going EU-US debate about Safe Harbor and US authorities’ access to EU data, is that the ECJ was also concerned that the Directive did not require the retained data to be held within the EU.  This suggests that the ECJ expects global companies to devise locally based EU data retention systems regardless of the cost or inconvenience.

What are the implications of the ECJ judgment?

This is a hugely significant decision coming as it does after the revelations prompted by Edward Snowden about the access by western law enforcement agencies to masses of data concerning individuals’ use of electronic resources. Although the Advocate General in his opinion last year suggested that an invalidity ruling on the Directive be suspended to allow the EU time to amend the legislation, the ECJ has not adopted this approach. Therefore, to all intents and purposes, the Directive is no longer EU law.

This ECJ judgment effectively invalidates implementing legislation such as the UK’s Data Retention Regulations.  This does not mean that UK ISPs and telcos won’t continue to collect and retain communications data for billing and other legitimate business purposes as permitted under the UK’s DPA and PEC Regs.  But they no longer have to do so in compliance with the UK Data Retention Regulations.  Indeed, there could be a risk that continuing to hold data for the retention periods under the Regulations is actually a breach of the data protection principle not to retain personal data for longer than is necessary.

What does this mean for telcos/ISPs?

It has been reported that the UK Government has already responded to the ECJ decision by saying that it is imperative that companies continue to retain data.  Clearly, the UK and other EU Governments would become very nervous if companies suddenly started deleting copious amounts of data, given the impact this could have on intelligence gathering to detect and prevent serious crime.  And in any event, in spite of what has happened at the ECJ, telcos and ISPs are still required to comply with law enforcement disclosure requests concerning the communications data they retain.

Significantly, the ECJ did not rule that this kind of data collection and retention is never warranted.  One of the ECJ’s main criticisms was that the Directive did not include clear and precise rules governing the scope and application of its measures and did not include minimum safeguards.  This suggests that the Directive could be redrafted (and relaunched) in a form that includes these rules and safeguards when requiring companies to retain communications data.  Of course, this is likely to take some time.  In the meantime, UK companies could consider reverting to the retention periods set out in the voluntary code introduced under the Anti-terrorism, Crime and Security Act 2001.


Beware: Europe’s take on the notification of personal data breaches to individuals

Posted on April 10th, 2014

The Article 29 Working Party (“WP 29”) has recently issued an Opinion on Personal Data Breach Notification (the “Opinion”).  The Opinion focuses on the interpretation of the criteria under which individuals should be notified about breaches that affect their personal data.

Before we analyse the takeaways from the Opinion, let’s take a step back: are controllers actually required to notify personal data breaches?

In Europe, controllers have, for a while now, been either legally required or otherwise advised to consider notifying personal data breaches to data protection regulators and/or subscribers or individuals.

Today, the only EU-wide personal data breach notification requirement derives from Directive 2002/58/EC, as amended by Directive 2009/136/EC (the “e-Privacy Directive”), and applies to providers of publicly available electronic communications services.  In some EU member states (for example, in Germany), this requirement has been extended to controllers in other sectors or to all controllers.  Similarly, some data protection regulators have issued guidance whereby controllers are advised to report data breaches under certain circumstances.

Last summer, the European Commission adopted Regulation 611/2013 (the “Regulation“), (see our blog regarding the Regulation here), which  sets out the technical implementing measures concerning the circumstances, format and procedure for data breach notification required under Article 4 of the e-Privacy Directive.

In a nutshell, providers must, without undue delay, notify individuals of breaches that are likely to adversely affect their personal data or privacy, taking account of: (i) the nature and content of the personal data concerned; (ii) the likely consequences of the personal data breach for the individual concerned (e.g. identity theft, fraud, distress, etc.); and (iii) the circumstances of the personal data breach.  Providers are exempt from notifying individuals (but not regulators) if they have demonstrated to the satisfaction of the data protection regulator that they have implemented appropriate technological protection measures rendering that data unintelligible to any person who is not authorised to access it.

The Opinion provides guidance on how controllers may interpret this notification requirement by analysing 7 practical scenarios of breaches that will meet the ‘adverse effect’ test.  For each of them, the WP 29 identifies the potential consequences and adverse effects of the breach and the security safeguards which might have reduced the risk of the breach occurring in the first place or, indeed, might have exempted the controller from notifying the breach to individuals altogether.

From the Opinion, it is worth highlighting:

The test.  The ‘adverse effect’ test is interpreted broadly to include ‘secondary effects’.  The WP 29 clearly states that all the potential consequences and potential adverse effects are to be taken into account.  This interpretation may be seen as a step too far, as not all ‘potential’ consequences are ‘likely’ to happen, and it will probably lead to a conservative interpretation of the notification requirement across Europe.

Security is key.  Controllers should put in place security measures that are appropriate to the risk presented by the processing, with emphasis on the implementation of those controls rendering data unintelligible.  Compliance with data security requirements should result in the mitigation of the risks of personal data breaches and even, potentially, in the application of the exemption from notifying individuals about the breach.  Examples of security measures identified as likely to reduce the risk of a breach occurring include: encryption (with a strong key); hashing (with a strong key); back-ups; physical and logical access controls; and regular monitoring of vulnerabilities.
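To illustrate what “rendering data unintelligible” can look like in practice, here is a minimal Python sketch of keyed hashing (one of the measures the WP 29 lists), using only the standard library – the identifier and the key handling are illustrative assumptions, not a production design:

```python
import hashlib
import secrets

# A strong random key, stored separately from the data it protects.
# (Illustrative only: a real deployment needs proper key management.)
key = secrets.token_bytes(32)

def pseudonymise(identifier: str, key: bytes) -> str:
    # BLAKE2b in keyed mode works like a MAC: without the key, the
    # digest can be neither reproduced nor reversed, so a stolen copy
    # of the stored digests is unintelligible on its own.
    return hashlib.blake2b(identifier.encode(), key=key).hexdigest()

digest = pseudonymise("alice@example.com", key)
# The same input with the same key always maps to the same digest,
# so records can still be matched without storing the raw identifier.
```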

Procedure.  Controllers should have procedures in place to manage personal data breaches.  This will involve a detailed analysis of the breach and its potential consequences.  In the Opinion, data breaches fall into three categories: availability, integrity and confidentiality breaches.  Applying this model may help controllers analyse a breach too.
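The Opinion’s three-category model can be sketched as a simple triage helper – the boolean inputs below are our own illustrative assumption, not terminology from the Opinion:

```python
from enum import Enum

class BreachType(Enum):
    AVAILABILITY = "availability"        # data destroyed or lost
    INTEGRITY = "integrity"              # data altered without authorisation
    CONFIDENTIALITY = "confidentiality"  # data disclosed to unauthorised parties

def classify(lost: bool, altered: bool, disclosed: bool) -> set:
    """One incident can fall into several categories at once - e.g. a
    stolen, unencrypted backup with no surviving copy is both an
    availability and a confidentiality breach."""
    types = set()
    if lost:
        types.add(BreachType.AVAILABILITY)
    if altered:
        types.add(BreachType.INTEGRITY)
    if disclosed:
        types.add(BreachType.CONFIDENTIALITY)
    return types
```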

How many individuals? The number of individuals affected by the breach should not have a bearing on the decision of whether or not to notify them.

Who must notify? It is explicitly stated in the Opinion that breach notification constitutes good practice for all controllers, even for those who are currently not required to notify by law.

There is a growing consensus in Europe that it is only a matter of time before an EU-wide personal data breach notification requirement applying to all controllers (regardless of the sector they are in) is in place.  Indeed, this will be the case if/when the proposed General Data Protection Regulation is approved.  Under it, controllers would be subject to strict notification requirements both to data protection regulators and to individuals.  This Opinion provides some insight into how the European regulators may interpret these requirements under the General Data Protection Regulation.

Controllers would therefore be well advised to prepare for what is coming their way (see previous blog here).  Focus should be on the application of security measures (both to prevent a breach and to mitigate the adverse effects on individuals once a breach has occurred) and on putting procedures in place to manage breaches effectively.  Start today – burying your head in the sand is no longer an option.


Article 29 Working Party issues draft model clauses for processor-to-subprocessor data transfers

Posted on April 9th, 2014

On 21st March 2014, the Article 29 Working Party (“WP 29”) issued a working document (WP 214) proposing new contractual clauses for cross-border transfers between an EU-based processor and a non-EU-based sub-processor (“draft model clauses”).  This document addresses the situation where personal data are initially transferred by a controller to a processor within the European Union (“EU”) and are subsequently transferred by the processor to a sub-processor located outside the EU.

Back in 2010, the EU Commission adopted a revised version of its model clauses for transfers between a controller in the EU and a processor outside the EU, partly to integrate new provisions on sub-processing.  However, it deliberately chose not to apply these new model clauses to situations where a processor established in the EU, processing personal data on behalf of a controller established in the EU, subcontracts its processing operations to a sub-processor established in a third country (see recital 23 of the EU Commission’s Decision 2010/87/EU).

Absent Binding Corporate Rules, many EU data processors were left with few options for transferring the data outside the EU. This issue is particularly relevant in the context of a growing digital economy where more and more companies are transferring their data to cloud computing service providers who are often based outside the EU. Negotiating ad hoc model clauses on a case-by-case basis with the DPAs seemed to be the only solution available. This is precisely what the Spanish DPA undertook in 2012 when it adopted a specific set of standard contractual clauses for processor–to-sub-processor transfers and put in place a new procedure allowing data processors based in Spain to obtain authorizations for transferring data processed on behalf of their customers (the data controllers) to sub-processors based outside the EU.

This has inspired the WP 29 to use the Spanish model as a basis for preparing draft ad hoc model clauses for transfers from an EU data processor to a non-EU sub-processor that could be used by any processor established in the EU.  However, these draft model clauses have yet to be formally adopted by the European Commission before they can be used by companies, and it may take a while before the EU Commission adopts a new official set of model clauses for data processors.  Meanwhile, companies cannot rely on the draft model clauses to obtain approval from their DPAs to transfer data outside the EU.  While the WP 29’s document is certainly a step in the right direction, it remains to be seen how these draft model clauses will be received by the business sector and whether they can work in practice.

Below is a list of the key provisions under the draft model clauses for data processors:

  • Structure: the overall structure and content of these draft clauses are similar to those that already exist under the controller-to-processor model clauses, but have been adapted to the context of transfers between a processor and sub-processor.
  • Framework Contract: the EU data processor must sign a Framework Contract with its controller, which contains a detailed list of obligations (16 in total) specified in the draft model clauses – including restrictions on onward sub-processing.  The practical effect of this could be to see the service terms between controllers and their EU processors expand to include a substantially greater number of data protection commitments, all with a view to facilitating future extra-EU transfers by the processor to international sub-processors under these model clauses.
  • Sub-processing: the EU processor must obtain its controller’s prior written approval in order to subcontract data processing activities to non-EU processors. It is up to the controller to decide, under the Framework Contract, whether it grants a general consent up front for all sub-processing activities, or whether a specific case-by-case approval is required each time the EU processor intends to subcontract its activities. The same applies to the sub-processing by the importing non-EU sub-processors. Any non-EU sub-processor must be contractually bound by the same obligations (including the technical and organisational security measures) as those that are imposed on the EU processor under the Framework Agreement.
  • List of sub-processing agreements: the EU processor must keep an updated list of all sub-processing agreements concluded and notified to it by its non-EU sub-processor at least once per year and must make this list available to the controller.
  • Third party beneficiary clause: depending on the situation, the data subject can enforce breaches of the model clauses against one of three parties: the exporting EU data processor (where the controller has factually disappeared or has ceased to exist in law); the importing non-EU data processor (where both the controller and the EU data processor have factually disappeared or have ceased to exist in law); or any subsequent sub-processor (where the controller, the exporting EU data processor and the importing non-EU data processor have all factually disappeared or have ceased to exist in law).
  • Audits: the exporting EU data processor must agree, at the request of its controller, to submit its data processing facilities for audit of the processing activities covered by the Framework Contract, which shall be carried out by the controller himself, or alternatively, an independent inspection body selected by the controller. The DPA competent for the controller has the right to conduct an audit of the exporting EU data processor, the importing non-EU data processor, and any subsequent sub-processor under the same conditions as those that would apply to an audit of the controller. The recognition of third party independent audits is especially important for cloud industry businesses who – for security and operational reasons – will often be reluctant to have clients conduct on-site audits but will typically be more comfortable holding themselves to independent third party audits.
  • Disclosure of the Framework Contract: the controller must make available to the data subjects and the competent DPA upon request a copy of the Framework Contract and any sub-processing agreement with the exception of commercially sensitive information which may be removed. In practice, it is questionable how many non-EU suppliers will be willing to sign sub-processing agreements with EU data processors on the understanding that provisions within those agreements could end up being disclosed to regulators and other third parties.
  • Termination of the Framework Contract: where the exporting EU processor, the importing non-EU data processor or any subsequent sub-processor fails to fulfil their model clauses obligations, the controller may suspend the transfer of data and/or terminate the Framework Contract.

Click here to access the WP 29’s working document WP 214 on draft ad hoc contractual clauses “EU data processor to non-EU sub-processor”.

Click here to view the article published in the World Data Protection Report.


Complex Cloud Contracting

Posted on March 26th, 2014

The greatest pleasure, and the greatest challenge, of being a privacy lawyer is the need to be both an ethicist and a pragmatist.  Oftentimes, I find myself advising companies not just on what is the legal thing to do, but what is the right thing to do (and, no, the two aren’t always one and the same); while, on other occasions, my task is to find solutions to real or imagined business impediments presented by the law.

Nowhere is this dichotomy more apparent than when advising on cloud deals.  The future is cloud and mobile, as someone once said.  So it seems an oddity that privacy laws are all too often interpreted in ways that impair cloud adoption and utilization.  This oddity is perhaps most apparent when negotiating cloud deals, where two parties who are in commercial agreement and want to realize the benefits of a cloud relationship are unable to reach contractual agreement over basic data protection terms.

This failure to reach contractual agreement is so often due to a misunderstanding, or (sometimes) a perverse interpretation of, EU data protection requirements, that I thought I’d use this post to set the record straight.  The following is necessarily broad brush, but hopefully paints a picture of the key things to consider in cloud deals and how to address them:

1.  What data protection terms does the law require?  In most cloud relationships, the service provider will be a “data processor” and its client the “data controller”.  In this type of relationship, the client is legally obligated to impose two key requirements on the service provider – first, that the service provider must act only on its instructions; second, that the service provider must have in place “appropriate” security.  There’s no point negotiating these.  Just accept them as a legal necessity and move on.

2.  What about Germany?  Germany is a huge market for cloud contracting, but its data privacy laws are notoriously strict.  If you’re a cloud provider rolling out a pan-EU service, you have to address German data privacy requirements as part of your offering or risk not doing business in a major EU market.  In addition to the two requirements just described above, Germany also mandates the need for precise “technical and organisational” security measures to be in place for the cloud service and the granting of audit rights in favour of the cloud client.  These need to be addressed either within the standard EU ts&cs for the cloud service or, alternatively, by way of bespoke terms just for German deals.

3.  Audit rights???  Yes, that’s right.  Certain EU territories, like Germany, expect that cloud clients should have audit rights over their cloud providers.  To most cloud providers, the idea of granting audit rights under their standard terms is anathema.  Imagine a provider with thousands of clients – you only need a small fraction of those clients to exercise audit rights at any one time for the business disruption to be overwhelming.  Not only that, but allowing multiple clients onsite and into server rooms for audit purposes itself creates a huge security risk.  So what’s the solution?  A common one is that many cloud service providers have these days been independently audited against ISO and SSAE standards.  Committing in the contract to maintain recognised third party audit certifications throughout the duration of the cloud deal – possibly even offering to provide a copy of the audit certification or a summary of the audit report – will (and rightly should) satisfy many cloud clients.

4.  The old “European data center” chestnut.  I’ve been in more than a few negotiations where there’s been a mistaken belief that the cloud service provider needs to host all data in Europe in order for the service to be “legal” under European data protection law.  This is a total fallacy.  Cloud service providers can (and, make no mistake, will) move data anywhere in the world – often in the interests of security, back-ups, support and cost efficiency.  What’s more, the law permits this – though it does require that some manner of legal “data export” solution first be implemented for data being transferred out of Europe.  There are a number of solutions available – from model clauses to safe harbor to Binding Corporate Rules.  Cloud clients need to check their service providers have one of these solutions in place and that it covers the data exports in question but, so long as they do, then there’s no reason why data cannot be moved around internationally for service-related reasons.

5.  Security.  The law requires cloud clients to ensure that their service providers have implemented “appropriate” security.  The thing is, cloud clients often aren’t best able to assess whether their cloud provider’s security is or is not “appropriate” – one of the commonly cited reasons for outsourcing to the cloud in the first place is to take the benefit of the greater security expertise that cloud providers offer.  To further complicate matters, some territories – like Germany, Poland and Spain – have precise data security rules.  It’s highly unlikely that a cloud provider will ever tailor its global IT infrastructure to address nationally-driven requirements of just one or two territories, so outside of heavily-regulated sectors, there’s little point trying to negotiate for those.  Instead, cloud clients should look to other security assurances the cloud provider can offer – most notably, whether it maintains ISO and SSAE certification (see above!).

6.  Subcontracting.  Cloud suppliers subcontract: it’s a fact of life.  Whether to their own group affiliates or externally to third party suppliers, the likelihood is that the party concluding the cloud contracting will not be (solely) responsible for performing it.  The question inevitably arises as to whether the supplier needs its client’s consent to subcontract: the short answer is, generally, yes, but there’s no reason why a general consent to subcontract can’t be obtained upfront in the contract.  At the same time, however, the cloud customer will want assurances that its data won’t be outsourced to a subcontractor with lax data protection standards, so any such consent should be carefully conditioned on the cloud provider flowing down its data protection responsibilities and committing to take responsibility for managing the subcontractor’s compliance.

7.  What other terms should be in a cloud contract?  In addition to the points already discussed, it’s critical that cloud providers have in place a robust data breach response mechanism – so that they detect security intrusions asap and inform the cloud client promptly, giving it the opportunity to manage its own fallout from the breach and address any legal data breach notification requirements it may be under.  In addition, cloud providers should be expected to inform their clients (where legally permitted to do so) about any notices or complaints they receive concerning their hosting or processing of their client’s data – the client will generally be on the hook for responding to these, so it’s important it receives these notices promptly giving it adequate time to respond.

So there’s no reason that data protection should be holding those deals up!  All of the issues described above have straightforward solutions that should be palatable to both cloud clients and providers alike.  Remember: good data protection and good business are not mutually exclusive – but realistic, compatible goals.


European Parliament votes in favour of data protection reform

Posted on March 21st, 2014 by

On 12 March 2014, the European Parliament (the “Parliament”) overwhelmingly voted in favour of the European Commission’s proposal for a Data Protection Regulation (the “Data Protection Regulation”) in its plenary assembly. In total 621 members of Parliament voted for the proposals and only 10 against. The vote cemented the Parliament’s support of the data protection reform, which constitutes an important step forward in the legislative procedure. Following the vote, Viviane Reding – the EU Justice Commissioner – said that “The message the European Parliament is sending is unequivocal: This reform is a necessity, and now it is irreversible”. While this vote is an important milestone in the adoption process, there are still several steps to go before the text is adopted and comes into force.

So what happens next?

Following the Civil Liberties, Justice and Home Affairs (LIBE) Committee’s report published in October 2013 (for more information on this report, see this previous article), this month’s vote means that the Council of the European Union (the “Council”) can now formally conduct its reading of the text based on the Parliament’s amendments. Since the EU Commission made its proposal, preparatory work in the Council has been running in parallel with the Parliament’s. However, the Council can only adopt its position after the Parliament has acted.

In order for the proposed Data Protection Regulation to become law, both the Parliament and the Council must adopt the text under what is called the “ordinary legislative procedure” – a process in which the decisions of the Parliament and the Council carry equal weight. The Parliament can only begin official negotiations with the Council once the Council presents its position. It seems unlikely that the Council will simply accept the Parliament’s position; more probably, it will want to put forward its own amendments.

In the meantime, representatives of the Parliament, the Council and the Commission will probably organise informal meetings, the so-called “trilogue” meetings, with a view to reaching a first reading agreement.

The EU Justice Ministers have already met several times in Council meetings in recent months to discuss the data protection reform. Although there seems to be broad support for the proposal among Member States, they haven’t yet reached agreement on some of the key provisions, such as the “one-stop shop” rule. The next meeting of the Council ministers is due to take place in June 2014.

Will there be further delays?

As the Council has not yet agreed its position, the pace at which the proposed regulation develops in the coming months largely depends on when that position is finalised. Once the Council has reached a position, there is also the possibility that the proposals could be amended further. If this happens, the Parliament may need to vote again before the process is complete.

Furthermore, with elections to the EU Parliament coming up this May, the whole adoption process will be put on hold until a new Parliament is in place and a new Commission is approved in the autumn of this year. Given these important political changes, it is difficult to predict when the Data Protection Regulation will finally be adopted.

It is worth noting, however, that the European heads of state and government publicly committed themselves to the ‘timely’ adoption of the data protection legislation by 2015 – though, with the slow progress made to date and work still remaining to be done, this looks a very tall order indeed.


CNIL: a regulator to watch in 2014

Posted on March 18th, 2014 by

Over the years, the number of on-site inspections by the French DPA (CNIL) has been on a constant rise. Based on the CNIL’s latest statistics (see CNIL’s 2013 Annual Activity Report), 458 on-site inspections were carried out in 2012, a 19 percent increase compared with 2011. The number of complaints also rose, to 6,000 in 2012, the largest share of which (31 percent) related to telecom/Internet services. In 2012, the CNIL served 43 formal notices asking data controllers to comply. In total, the CNIL pronounced 13 sanctions, eight of which were made public. In the majority of cases, the sanction pronounced was a simple warning (56 percent), while fines were imposed in only 25 percent of cases.

The beginning of 2014 was marked by a landmark decision of the CNIL. On January 3, 2014, the CNIL pronounced a record fine against Google of €150,000 ($204,000) on the grounds that the terms of use available on its website since March 1, 2012, allegedly did not comply with the French Data Protection Act. Google was also required to publish this sanction on the homepage of its website within eight days of it being pronounced. Google appealed this decision; however, on February 7, 2014, the State Council (“Conseil d’Etat”) rejected Google’s request to suspend the publication order.

Several lessons can be learnt from the CNIL’s decision. First, that the CNIL is politically motivated to hit the Internet giants hard, especially those who claim that their activities do not fall within the remit of French law. No, says the CNIL. Your activities target French consumers, and thus, you must comply with the French Data Protection Act even if you are based outside the EU. This debate has been going on for years and was recently discussed in Brussels within the EU Council of Ministers’ meeting in the context of the proposal for a Data Protection Regulation. As a result, Article 4 of Directive 95/46/EC could soon be amended to allow for a broader application of European data protection laws to data controllers located outside the EU.

Second, despite it being the highest sanction ever pronounced by the CNIL, this is hardly a dissuasive financial sanction against a global business with large revenues. Currently, the CNIL cannot pronounce sanctions above €150,000, or €300,000 ($410,000) in the case of a second breach within five years of the first sanction, whereas some of its counterparts in other EU countries can pronounce much heavier sanctions; e.g., last December, the Spanish DPA pronounced a €900,000 ($1,230,000) fine against Google. This could soon change, however, in light of an announcement made by the French government that it intends to introduce this year a bill on “the protection of digital rights and freedoms,” which could significantly increase the CNIL’s enforcement powers.

Furthermore, it seems that the CNIL’s lobbying efforts within the French Parliament are finally beginning to pay off. A new law on consumer rights came into force on 17 March 2014, which amends the Data Protection Act and grants the CNIL new powers to conduct online inspections in addition to the existing on-site inspections. This provision gives the CNIL the right, via an electronic communication service to the public, “to consult any data that are freely accessible, or rendered accessible, including by imprudence, negligence or by a third party’s action, if required, by accessing and by remaining within automated data processing systems for as long as necessary to conduct its observations.” This new provision opens up the CNIL’s enforcement powers to the digital world and, in particular, gives it stronger powers to inspect the activities of major Internet companies. The CNIL says that this law will allow it to verify online security breaches, privacy policies and consent mechanisms in the field of direct marketing.

Finally, the Google case is a good example of the EU DPAs’ recent efforts to conduct coordinated cross-border enforcement actions against multinational organizations. At the beginning of 2013, a working group was set up in Paris, led by the CNIL, to carry out a simultaneous and coordinated enforcement action against Google in several EU countries. As a result, Google was inspected and sanctioned in multiple jurisdictions, including Spain and The Netherlands. Google is appealing these sanctions.

As the years pass by, the CNIL continues to grow and is increasingly well resourced. It is also more experienced and better organized. The CNIL is already very influential within the Article 29 Working Party, as recently illustrated by the Google case, and Isabelle Falque-Pierrotin, the chairwoman of the CNIL, was recently elected chair of the Article 29 Working Party. Thus, companies should pay close attention to the actions of the CNIL as it becomes a more powerful authority in France and within the European Union.

This article was first published in the IAPP’s Privacy Tracker on 27 February 2014 and was updated on 18th March 2014.


Progress update on the EU Cybersecurity Strategy

Posted on March 13th, 2014 by


On 28 February 2014, the European Commission hosted a “High Level Conference on the EU Cybersecurity Strategy” in Brussels.  The conference provided an opportunity for EU policy-makers, industry representatives and other interested parties to assess the progress of the EU Cybersecurity Strategy, which was adopted by the European Commission on 7 February 2013.

Keynote speech by EU Digital Agenda Commissioner Neelie Kroes

The implementation of the EU Cybersecurity Strategy comes at a time when public and private actors face escalating cyber threats.  During her keynote speech at the conference, Commissioner Kroes reiterated the dangers of weak cybersecurity measures by asserting that “without security, there is no privacy”.

She further highlighted the reputational and financial impact of cyber threats, commenting that over 75% of small businesses and 93% of large businesses have suffered a cyber breach, according to a recent study.  However, Commissioner Kroes also emphasised that effective EU cybersecurity practices could constitute a commercial advantage for the 28 Member State bloc in an increasingly interconnected global marketplace.

Status of the draft EU Cybersecurity Directive

The EU Cybersecurity Strategy’s flagship legal instrument is draft Directive 2013/0027 concerning measures to ensure a high common level of network and information security across the Union (“draft EU Cybersecurity Directive”).  In a nutshell, the draft EU Cybersecurity Directive seeks to impose certain mandatory obligations on “public administrations” and “market operators” with the aim of harmonising and strengthening cybersecurity across the EU. In particular, it includes an obligation to report security incidents to the competent national regulator.

The consensus at the conference was that further EU institutional reflection is required on some aspects of the draft EU Cybersecurity Directive, such as (1) the scope of obligations, i.e., which entities are included as “market operators”; (2) how Member State cooperation would work in practice; (3) the role of the National Competent Authorities (“NCAs”); and (4) the criminal dimension and the requirement for NCAs to notify law enforcement authorities.  The scope of obligations is a particularly contentious issue as EU decision-makers consider whether to include certain entities, such as software manufacturers, hardware manufacturers and internet platforms, within the scope of the Directive.

The next few months will be a crucial period for the legislative passage of the draft law.  Indeed, the European Parliament voted on 13 March 2014 in the Plenary session to adopt its draft Report on the Directive.  The Council will now spend March – May 2014 working on the basis of the Parliament’s report to achieve a Council “common approach”.  The dossier will then likely be revisited after the European Parliament elections in May 2014.  The expected timeline for adoption remains “December 2014”, but various decision-making scenarios are possible depending on the outcome of the elections.

Once the Directive is adopted, Member States will have 18 months to transpose it into national law (meaning an approximate deadline of mid-2016).  Because it is a minimum harmonisation Directive, Member States could go beyond its provisions in their national transpositions, for instance by reinstating internet platforms within the definition of a “market operator”.

One of the challenges for organizations will be achieving compliance with possibly conflicting notification requirements between the draft EU Cybersecurity Directive (i.e., obligation to report security incidents to the competent national regulator), the existing ePrivacy Directive (i.e., obligation for telecom operators to notify personal data breaches to the regulator and to individuals affected) and, if adopted, the EU Data Protection Regulation (i.e., obligation for all data controllers to notify personal data security breaches to the regulator and to individuals affected).  So far, EU legislators have not provided any guidance as to how these legal requirements would coexist in practice.

Industry’s perspective on the EU Cybersecurity Strategy

During the conference, representatives from organisations such as Belgacom and SWIFT highlighted the real and persistent threat facing companies. Calls were made for international coordination on cybersecurity standards and laws to avoid conflicting regulatory requirements.  Interventions also echoed Commissioner Kroes’s earlier sentiment that cybersecurity offers significant growth opportunities for EU industry.

Businesses spoke of the need to “become paranoid” about the cyber threat and to implement “security by design” to protect data.  Finally, trust, collaboration and cooperation between Member States and public and private actors were viewed as essential to ensuring EU cyber resilience.


How do EU and US privacy regimes compare?

Posted on March 5th, 2014 by

As an EU privacy professional working in the US, one of the things that regularly fascinates me is each continent’s misperception of the other’s privacy rules.  Far too often have I heard EU privacy professionals (who really should know better) mutter something like “The US doesn’t have a privacy law” in conversation; equally, I’ve heard US colleagues talk about the EU’s rules as being “nuts” without understanding the cultural sensitivities that drive European laws.

So I thought it would be worth dedicating a few lines to compare and contrast the different regimes, principally to highlight that, yes, they are indeed different, but, no, you cannot draw a conclusion from these differences that one regime is “better” (whatever that means) than the other.  You can think of what follows as a kind of brief 101 in EU/US privacy differences.

1.  Culturally, there is a stronger expectation of privacy in the EU.  It’s often said that there is a stronger cultural expectation of privacy in the EU than the US.  Indeed, that’s probably true.  Privacy in the EU is protected as a “fundamental right” under the European Union’s Charter of Fundamental Rights – essentially, it’s akin to a constitutional right for EU citizens.  Debates about privacy and data protection evoke as much emotion in the EU as do debates about gun control legislation in the US.

2.  Forget the myth: the US DOES have data protection laws.  It’s simply not true that the US doesn’t have data protection laws.  The difference is that, while the EU has an all-encompassing data protection framework (the Data Protection Directive) that applies across every Member State, across all sectors and across all types of data, the US has no directly analogous equivalent.  That’s not the same thing as saying the US has no privacy laws – it has an abundance of them!  From federal rules designed to deal with specific risk scenarios (for example, collection of child data online is regulated under the Children’s Online Privacy Protection Act), to sector-specific rules (the Health Insurance Portability and Accountability Act for health-related information and the Gramm-Leach-Bliley Act for financial information), to state-driven rules (the California Online Privacy Protection Act, for example – California, incidentally, also protects individuals’ right to privacy under its constitution).  So the next time someone tells you that the US has no privacy law, don’t fall for it – comparing EU and US privacy rules is like comparing apples to a whole bunch of oranges.

3.  Class actions.  US businesses spend a lot of time worrying about class actions and, in the privacy realm, there have been plenty.  Countless times I’ve sat with US clients who agonise over their privacy policy drafting to ensure that the disclosures they make are sufficiently clear and transparent to avoid any accusation that they may have misled consumers.  Successful class actions can run into millions of dollars and, with that much potential liability at stake, US businesses take this privacy compliance risk very seriously.  But when was the last time you heard of a successful class action in the EU?  For that matter, when was the last time you heard of ANY kind of award of meaningful damages to individuals for breaches of data protection law?

4.  Regulatory bark vs. bite.  So, in the absence of meaningful legal redress through the courts, what can EU citizens do to ensure their privacy rights are respected?  The short answer is complain to their national data protection authorities, and EU data protection authorities tend to be very interested and very vocal.  Bodies like the Article 29 Working Party, for example, pump out an enormous volume of regulatory guidance, as do certain national data protection authorities, like the UK Information Commissioner’s Office or the French CNIL. Over in the US, American consumers also have their own heavyweight regulatory champion in the form of the Federal Trade Commission which, by using its powers to take enforcement action against “unfair and deceptive practices” under the FTC Act, is getting ever more active in the realm of data protection enforcement.  And look at some of the settlements it has reached with high profile companies – settlements that, in some cases, have run in excess of US$20m and resulted in businesses having to subject themselves to 20 year compliance audits.  By contrast, however vocal EU DPAs are, their powers of enforcement are typically much more limited, with some even lacking the ability to fine.

So those are just some of the big picture differences, but there are so many more points of detail a well-informed privacy professional ought to know – like how the US notion of “personally identifiable information” contrasts with EU “personal data”, why the US model of relying on consent to legitimise data processing is less favoured in the EU, and what the similarities and differences are between US “fair information practice principles” and EU “data protection principles”.

That’s all for another time, but for now take away this:  while they may go about it in different ways, the EU and US each share a common goal of protecting individuals’ privacy rights.  Is either regime perfect?  No, but each could sure learn a lot from the other.





CNIL issues new guidelines on the processing of bank card details

Posted on February 27th, 2014 by

On February 25, 2014, the French Data Protection Authority (“CNIL”) issued a press release regarding new guidelines adopted last November on the processing of bank card details relating to the sale of goods and the provision of services at a distance (the “Guidelines”). Due to the increase in online transactions and the higher number of complaints received by the CNIL from customers in recent years, the CNIL decided to repeal and replace its previous guidelines, which dated from 2003. The new guidelines apply to all types of bank cards, including private payment cards and credit cards.

Purposes of processing

The CNIL defines the main purpose of using a bank card number as processing a transaction with a view to delivering goods or providing a service in return for payment. In addition, bank card details may be processed for the following purposes:

  • to reserve a good or service;
  • to create a payment account to facilitate future payments on a merchant’s website;
  • to enable payment service providers to offer dedicated payment solutions at a distance (e.g., virtual cards or wallets, rechargeable accounts, etc.); and
  • to combat fraud.

Types of data collected

As a general rule, the types of data that are strictly necessary to process online payments should be limited to:

  • the bank card number;
  • the expiry date; and
  • the 3-digit cryptogram on the back of the card.

The cardholder’s identity must not be collected, unless it is necessary for a specific and legitimate purpose, such as to combat fraud.

Period of retention

Bank card details may only be stored for the duration that is necessary to process the transaction, and must be deleted once the payment has taken place (or, where applicable, at the end of the period corresponding to the right of withdrawal). Following this period, the bank card details may be archived and kept for 13 months (or 15 months in the case of a deferred debit card) for evidence purposes (e.g., in case of a dispute over a transaction).

Beyond this period, the bank card details may be kept only if the cardholder’s prior consent is obtained or to prevent fraudulent use of the card. In particular, the merchant must obtain the customer’s prior consent in order to create a payment account that remembers the customer’s bank card details for future payments.

However, the CNIL considers that the 3-digit cryptogram on the card is meant to verify that the cardholder is in possession of his/her card, and thus, it is prohibited to store this number after the end of the transaction, including for future payments.
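To make the retention schedule concrete, it lends itself to a simple deadline calculation. The sketch below is purely illustrative – the function names and the month arithmetic are my own assumptions, not anything prescribed by the CNIL or the Guidelines:

```python
from datetime import date

# Archive periods described in the Guidelines (for evidence purposes).
ARCHIVE_MONTHS = 13            # standard bank card
ARCHIVE_MONTHS_DEFERRED = 15   # deferred debit card

def add_months(d: date, months: int) -> date:
    """Shift a date forward by whole months, clamping to the month's last day."""
    month_index = d.month - 1 + months
    year, month = d.year + month_index // 12, month_index % 12 + 1
    leap = year % 4 == 0 and (year % 100 != 0 or year % 400 == 0)
    last_day = [31, 29 if leap else 28, 31, 30, 31, 30,
                31, 31, 30, 31, 30, 31][month - 1]
    return date(year, month, min(d.day, last_day))

def archive_deadline(payment_date: date, deferred_debit: bool = False) -> date:
    """Latest date archived card details may be kept for evidence purposes."""
    months = ARCHIVE_MONTHS_DEFERRED if deferred_debit else ARCHIVE_MONTHS
    return add_months(payment_date, months)

def may_store_cryptogram(transaction_complete: bool) -> bool:
    """The 3-digit cryptogram must never be kept once the transaction ends."""
    return not transaction_complete
```

On this reading, card details archived after a payment made on 25 February 2014 would need to be deleted by 25 March 2015 (or 25 May 2015 for a deferred debit card), absent the cardholder’s consent or a fraud-prevention ground.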

Security measures

Due to the risk of fraud, controllers must implement appropriate security measures, including measures preventing unauthorized access to, or use of, the data. These security measures must comply with applicable industry standards and requirements, such as the Payment Card Industry Data Security Standard (PCI DSS), which applies to all organizations that handle payment card data.

The CNIL recommends that the customer’s bank card details not be stored on his/her terminal equipment (e.g., computer, smartphone), given the lack of appropriate security measures on such devices. Furthermore, bank card numbers must not be used as a means of customer identification.

For security reasons (including those imposed on the cardholder), the controller (or processor) must not request a copy of the bank card in order to process a payment.

Finally, the CNIL recommends notifying the cardholder if his/her bank card details are breached in order to limit the risk of fraudulent use of the bank card details (e.g., to ask the bank to block the card if there is a risk of fraud).

Future legislation

In light of the anticipated adoption of the Data Protection Regulation, organizations will face more stringent obligations, including privacy-by-design, privacy impact assessments and more transparent privacy policies.