Archive for the ‘Data as an asset’ Category

Incentivising compliance through tangible benefits

Posted on September 29th, 2013



The secret of compliance is motivation. That motivation does not normally come from the pleasure and certainty derived from ticking all possible boxes on a compliance checklist. Although, having said that, I have come across sufficiently self-disciplined individuals who seem to make a virtue out of achieving the highest degree of data privacy compliance within their organisations. However, this is quite exceptional. In truth, it is very difficult for any organisation – big or small, in the private or public sector – to get its act together simply out of fear of non-compliance with the law. Putting effective policies and procedures in place is never the result of a sheer drive to avoid regulatory punishment. Successful legal compliance is, more often than not, the result of presenting dry and costly legal obligations as something else. In particular, something that provides tangible benefits.

The fact that personal information is a valuable asset is demonstrated daily. Publicly quoted corporate powerhouses whose business model is entirely dependent on people’s data evidence the present. Innovative and fast growing businesses in the tech, digital media, data analytics, life sciences and several other sectors show us the future. In all cases, the consistent message coming not just from boardrooms, but from users, customers and investors, is that data fuels success and opportunity. Needless to say, most of that data is linked to each of us as individuals and, therefore, its use has implications in one way or another for our privacy. So, when looked at from the point of view of an organisation which wishes to exploit that data, regulating data privacy equates to regulating the exploitation of an asset.

The term ‘exploitation’ instinctively brings to mind negative connotations. When talking about personal information, whose protection – as is well known – is regarded as a fundamental human right in the EU, the term exploitation is especially problematic. The insinuation that something of such an elevated legal rank is being indiscriminately used to someone’s advantage makes everyone feel uncomfortable. But what about the other meaning of the word? Exploitation is also about making good use of something by harnessing its value. Many responsible and successful businesses, governments and non-profit organisations look at exploiting their assets as a route to sustainability and growth. Exploiting personal information does not need to be negative and, in fact, greater financial profits and popular support – and ultimately, success – will come from responsible, but effective ways of leveraging that asset.

For that reason, it is possible to argue that the most effective way of regulating the exploitation of data as an asset is to prove that responsible exploitation brings benefits that organisations can relate to. In other words, policy making in the privacy sphere should emphasise the business and social benefits – for the private and public sector respectively – of achieving the right level of legal compliance. The rest is likely to follow much more easily and all types of organisations – commercial or otherwise – will endeavour to make the right decisions about the data they collect, use and share. Right for their shareholders, but also for their customers, voters and citizens. The message for policy makers is simple: bring compliance with the law closer to the tangible benefits that motivate decision makers.

This article was first published in Data Protection Law & Policy in September 2013 and is an extract from Eduardo Ustaran’s forthcoming book The Future of Privacy, which is due to be published in November 2013.

Global protection through mutual recognition

Posted on July 23rd, 2013



At present, there is a visible mismatch between the globalisation of data and the multinational approach to privacy regulation. Data is global by nature as, regulatory limits aside, it runs unconstrained through wired and wireless networks across countries and continents. Put in a more poetic way, a digital torrent of information flows freely in all possible directions every second of the day without regard for borders, geographical distance or indeed legal regimes and cultures. Data legislation, on the other hand, is typically attached to a particular jurisdiction – normally a country, sometimes a specific territory within a country and occasionally a selected group of countries. As a result, today, there is no such thing as a single global data protection law that follows the data as it makes its way around the world.

However, there is light at the end of the tunnel. Despite the current trend of new laws in different shapes and flavours emerging from all corners of the planet, there is still a tendency amongst legislators to rely on a principles-based approach, even if that translates into extremely prescriptive obligations in some cases – such as Spain’s applicable data security measures depending on the category of data or Germany’s rules to include certain language in contracts for data processing services. Whether it is lack of imagination or testimony to the sharp brains behind the original attempts to regulate privacy, it is possible to spot a common pedigree in most laws, which is even more visible in the case of any international attempts to frame privacy rules.

When analysed in practice and through the filter of distant geographical locations and moments in time, it is definitely possible to appreciate the similarities in the way privacy principles have been implemented by fairly diverse regulatory frameworks. Take ‘openness’ in the context of transparency, for example. The words may be slightly different and in the EU directive, it may not be expressly named as a principle, but it is consistently everywhere – from the 1980 OECD Guidelines to Safe Harbor and the APEC Privacy Framework. The same applies to the idea of data being collected for specified purposes, being accurate, complete and up to date, and people having access to their own data. Seeing the similarities or the differences between all of these international instruments is a matter of mindset. If one looks at the words, they are not exactly the same. If one looks at the intention, it does not take much effort to see how they all relate.

Being a lawyer, I am well aware of the importance of each and every word and its correct interpretation, so this is not an attempt to brush away the nuances of each regime. But in the context of something like data and the protection of all individuals throughout the world to whom the data relates, achieving some global consistency is vital. The most obvious approach to resolving the data globalisation conundrum would be to identify and put in place a set of global standards that apply on a worldwide basis. That is exactly what a number of privacy regulators backed by a few influential thinkers tried to do with the Madrid Resolution on International Standards on the Protection of Personal Data and Privacy of 2009. Unfortunately, the Madrid Resolution never became a truly influential framework. Perhaps it was a little too European. Perhaps the regulators ran out of steam to press on with the document. Perhaps the right policy makers and stakeholders were not involved. Whatever it was, the reality is that today there is no recognised set of global standards that can be referred to as the one to follow.

So until businesses, politicians and regulators manage to crack a truly viable set of global privacy standards, there is still an urgent need to address the privacy issues raised by data globalisation. As always, the answer is dialogue. Dialogue and a sense of common purpose. The USA and the EU in particular have some important work to do in the context of their trade discussions and review of Safe Harbor. First they must both acknowledge the differences and recognise that an area like privacy is full of historical connotations and fears. But most important of all, they must accept that principles-based frameworks can deliver a universal baseline of privacy protection. This means that efforts must be made by all involved to see what Safe Harbor and EU privacy law have in common – not what they lack. It is through those efforts that we will be able to create an environment of mutual recognition of approaches and ultimately, a global mechanism for protecting personal information.

This article was first published in Data Protection Law & Policy in July 2013.

The conflicting realities of data globalisation

Posted on June 17th, 2013



The current data globalisation phenomenon is largely due to the close integration of borderless communications with our everyday comings and goings. Global communications are so embedded in the way we go about our lives that we are hardly aware of how far our data is travelling every second that goes by. But data is always on the move and we don’t even need to leave home to be contributing to this. Ordinary technology right at our fingertips is doing the job for us, leaving behind an international trail of data – some of it more public than the rest.

The Internet is global by definition. Or more accurately, by design. The original idea behind the Internet was to rely on geographically dispersed computers to transmit packets of information that would be correctly assembled at destination. That concept developed very quickly into a borderless network and today we take it for granted that the Internet is unequivocally global. This effect has been maximised by our ability to communicate whilst on the move. Mobile communications have penetrated our lives at an even greater speed and in a more significant way than the Internet itself.

This trend has led visionaries like Google’s Eric Schmidt to affirm that, thanks to mobile technology, the number of digitally connected people will more than triple very soon – going from the current 2 billion to 7 billion people – with a corresponding multiplication of the data they generate. Similarly, the global leader in professional networking, LinkedIn, which has just celebrated its 10th anniversary, is banking on mobile communications as one of the pillars for achieving its mission of connecting the world’s professionals.

As a result, everyone is global – every business, every consumer and every citizen. One of the realities of this situation has been exposed by the recent PRISM revelations, which highlight very clearly the global availability of digital communications data. Perversely, the news about the NSA programme is set to have a direct impact on the current and forthcoming legislative restrictions on international data flows, which is precisely one of the factors disrupting the globalisation of data. In fact, PRISM is already being referred to as a key justification for a tight EU data protection framework and strong jurisdictional limitations on data exports, no matter how nonsensical those limitations may otherwise be.

The public policy and regulatory consequences of the PRISM affair for international data flows are pretty predictable. Future ‘adequacy findings’ by the European Commission as well as Safe Harbor will be negatively affected. We can assume that if the European Commission decides to seek a re-negotiation of Safe Harbor, this will be cited as a justification. Things will not end there. Both contractual safeguards and binding corporate rules will be expected to address possible conflicts of law involving data requests for law enforcement or national security reasons so that no blanket disclosures are allowed. And of course, the derogations from the prohibition on international data transfers will be narrowly interpreted, particularly when they refer to transfers that are necessary on grounds of public interest.

The conflicting realities of data globalisation could not be more striking. On the one hand, everyday practice shows that data is geographically neutral and simply flows across global networks to make itself available to those with access to it. On the other, it is going to take a fair amount of convincing to show that any restrictions on international data flows should be both measured and realistic. To address these conflicting realities we must therefore acknowledge the global nature of the web and Internet communications, the borderless fluidity of the mobile ecosystem and our human ability to embrace the most ambitious innovations and make them ordinary. So since we cannot stop the technological evolution of our time and the increasing value of data, perhaps it is time to accept that regulating data flows should not be about putting up barriers but about applying globally recognised safeguards.

This article was first published in Data Protection Law & Policy in June 2013.

Big data means all data

Posted on April 19th, 2013



There is an awesomeness factor in the way data about our digital comings and goings is being captured nowadays.  That awesomeness is such that it cannot even be described in numbers.  In other words, the concept of big data is not about size but about reach.  In the same way that the ‘wow’ of today’s computer memory will turn into a ‘so what’ tomorrow, references to terabytes of data are meaningless to define the power and significance of big data.  The best way to understand big data is to see it as a collection of all possible digital data.  Absolutely all of it.  Some of it will be trivial and most of it will be insignificant in isolation, but when put together its significance becomes clearer – at least to those who have the vision and astuteness to make the most of it.

Take transactional data as a starting point.  One purchase by one person is meaningful up to a point – so if I buy a cookery book, the retailer may be able to infer that I either know someone who is interested in cooking or I am interested in cooking myself.  If many more people buy the same book, apart from suggesting that it may be a good idea to increase the stock of that book, the retailer as well as other interested parties – publishers, food producers, nutritionists – could derive some useful knowledge from those transactions.  If I then buy cooking ingredients, the price of those items alone will give a picture of my spending bracket.  As the number of transactions increases, the picture gets clearer and clearer.  Now multiply the process for every shopper, at every retailer and every transaction.  You automatically have an overwhelming amount of data about what people do with their money – how much they spend, on what, how often and so on.  Is that useful information?  It does not matter, it is simply massive and someone will certainly derive value from it.  
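The aggregation just described can be sketched in a few lines of code. The data and names here are entirely hypothetical; the point is simply that records which say little in isolation become a recognisable spending profile once grouped by customer:

```python
from collections import defaultdict

# Hypothetical transaction records: (customer, item, price).
transactions = [
    ("alice", "cookery book", 18.99),
    ("alice", "olive oil", 9.50),
    ("alice", "saffron", 24.00),
    ("bob", "cookery book", 18.99),
]

# Group the individually trivial records into per-customer profiles.
profiles = defaultdict(lambda: {"total": 0.0, "items": []})
for customer, item, price in transactions:
    profiles[customer]["total"] += price
    profiles[customer]["items"].append(item)

# Each record alone reveals little; the aggregate suggests interests
# and a spending bracket for each person.
for customer, profile in profiles.items():
    print(customer, round(profile["total"], 2), profile["items"])
```

Multiply this toy loop across every shopper, every retailer and every transaction, and the “overwhelming amount of data” the paragraph describes follows naturally.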

That’s just the purely transactional stuff.  Add information about at what time people turn on their mobile phones, switch on the hot water or check their e-mail, which means of transportation they use to go where and when they enter their workplaces – all easily recordable.  Include data about browsing habits, app usage and means of communication employed.  Then apply a bit of imagination and think about this kind of data gathering in an Internet of Things scenario, where offline everyday activities are electronically connected and digitally managed.  Now add social networking interactions, blogs, tweets, Internet searches and music downloads.  And for good measure, include some data from your GPS, hairdresser and medical appointments, online banking activities and energy company.  When does this stop?  It doesn’t.  It will just keep growing.  It’s big data and is happening now in every household, workplace, school, hospital, car, mobile device and website.

What has happened in an uncoordinated but consistent manner is that all those daily activities have become a massive source of information which someone, somewhere is starting to make use of.  Is this bad?  Not necessarily.  So far, we have seen pretty benign and very positive applications of big data – from correctly spelt Internet searches and useful shopping recommendations to helpful traffic-free driving directions and even predictions in the geographical spread of contagious diseases.  What is even better is that, data misuses aside, the potential of this humongous amount of information is as big as the imagination of those who can get their hands on it, which probably means that we have barely started to scratch the surface of it all.

Our understanding of the potential of big data will improve as we become more comfortable and familiar with its dimensions but even now, it is easy to see its economic and social value.  But with value comes responsibility.  Just as those who extract and transport oil must apply utmost care to the handling of such precious but hazardous material, those who amass and manipulate humanity’s valuable data must be responsible and accountable for their part.  It is not only fair but entirely right that the greater the potential, the greater the responsibility, and that anyone entrusted with our information should be accountable to us all.  It should not be up to us to figure out and manage what others are doing with our data.  Frankly, that is simply unachievable in a big data world.  But even if we cannot measure the size of big data, we must still find a way to apportion specific and realistic responsibilities for its exploitation.

 

This article was first published in Data Protection Law & Policy in April 2013.

Smart Meters – new data access and privacy rules for the energy sector

Posted on February 21st, 2013



The Department of Energy and Climate Change (DECC) carried out numerous studies and soundings in preparation for the rollout of smart energy meters to over 30 million UK homes between 2014 and 2019, but the most polemical press coverage was elicited by the consultation in Spring 2012 on the data access and privacy issues raised by the valuable energy consumption data (Consumption Data) generated by these new metering devices. Some newspapers cited warnings of “cyber attacks by foreign hackers” and “a spy in every home”, and there was much interest in the concerns highlighted in a report published in June by the European Data Protection Supervisor that the most granular real-time Consumption Data could reveal details such as the daily habits of household members or even tell burglars when a house was unoccupied.

The UK government’s response to this consultation, published on 12th December 2012, sheds considerable light on the data protection compliance measures that must be put in place by energy companies, network operators and others who access Consumption Data such as ‘switching’ websites and energy services suppliers. These requirements will apply alongside (and in addition to) those already set out in the Data Protection Act 1998. The measures will be implemented via amendments to the licence conditions adhered to by energy suppliers (enforced by Ofgem) and a new Smart Energy Code overseen by a dedicated Smart Energy Code Panel. A central information hub controlled by a body known as the Data and Communications Company (DCC) will enable remote access to Consumption Data for suppliers and third parties that have agreed to be bound by the Code.

Background: The aim of the UK government’s smart meters programme is to give consumers real-time information about their energy consumption in the hope that this will help to control costs and eliminate estimated energy bills, on top of the environmental and cost-saving side effects of the behavioural changes such information may encourage. In the long term, it is hoped that smart energy data will lead to fluctuating, real-time energy pricing, enabling consumers to see how expensive it will be to use gas or electricity at any given time of day.

Key rules: There are some key elements to the new framework which apply differently to energy suppliers (such as British Gas and EDF Energy), network operators (companies that own and lease the infrastructure for delivering gas and electricity to premises) and “third parties” such as switching websites and energy companies when they are not acting in their capacity as a supplier to the relevant household.

A crucial aspect of the rules that applies to all parties is the requirement to obtain explicit, opt-in consent before using Consumption Data for any marketing purposes. For other uses, third parties will always need opt-in consent to remotely access Consumption Data at any level of granularity, whereas energy suppliers will need opt-in consent only to remotely access the most detailed level of Consumption Data (relating to a period of less than one day).
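Purely as an illustration, the consent rules just described can be reduced to a small decision function. The party labels, the marketing rule and the one-day granularity threshold are taken from the text above; the function itself, its name and its parameters are a hypothetical sketch, not anything drawn from the Code:

```python
def opt_in_consent_required(party, purpose, granularity_days):
    """Illustrative only: does remote access to Consumption Data
    need opt-in consent under the rules sketched in the article?"""
    # Marketing always requires explicit opt-in consent, whoever asks.
    if purpose == "marketing":
        return True
    # Third parties need opt-in consent at any level of granularity.
    if party == "third_party":
        return True
    # Suppliers need opt-in consent only for sub-daily (most detailed) data.
    if party == "supplier" and granularity_days < 1:
        return True
    # Other cases (e.g. network operators) are outside this sketch.
    return False

print(opt_in_consent_required("supplier", "billing", 1))        # False
print(opt_in_consent_required("supplier", "billing", 0.5))      # True
print(opt_in_consent_required("third_party", "analytics", 30))  # True
```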

From a consumer protection perspective, perhaps the most important safeguards introduced by the Stage 1 draft of the Smart Energy Code published in November 2012 are the requirements on third parties requesting Consumption Data from the DCC to:

(a)  take measures to verify that the relevant household member has solicited the services connected with the third party’s data request;

(b)  self-certify that the necessary consent has been obtained; and

(c)   provide reminders to consumers about the Consumption Data being collected at appropriate, regular intervals.

Privacy Impact Assessments: In line with Privacy by Design principles promoted by data protection authorities globally, the UK government has developed its own Privacy Impact Assessment to assess and anticipate the potential privacy risks of the smart metering programme as a whole. The idea is that the government’s PIA will be an “umbrella document” and every data controller wishing to access Consumption Data is expected to carry out its own PIA before the new framework comes into force (likely to be this summer). The European Commission is also developing a template PIA for this purpose.

Apart from helping to identify risks to customers and potential company liabilities, PIAs are lauded by the UK Information Commissioner as the best way to protect brand reputation, shape communication strategies and avoid expensive “bolt-on” solutions.

Conclusions: Research carried out as part of the UK government’s Data Access and Privacy consultation showed that the overwhelming concern of consumers questioned was that smart meter data would lead to an increase in direct marketing communications. Many participants did not identify the potential for misuse of Consumption Data until it was explained to them. The less obvious nature of the potential for privacy intrusion of this new data underlines the fact that consent is not a panacea in the case of smart meters (despite the considerable focus on this in the consultation responses).

So, clear and comprehensive information is key. As part of preparing for compliance, companies planning to access Consumption Data should build clear messaging into all customer-facing procedures, including those in respect of all in-person, online and call centre interaction. And whilst some of the finer details of the new rules are yet to be ironed out, it’s clear that every organisation concerned will be expected to digest the details of the new framework now and be fully prepared – including by completing Privacy Impact Assessments – in time for when the regulatory framework comes into force, expected to be June 2013.

A longer version of this article was first published in Data Protection Law & Policy in February 2013.

 

Big Data at risk

Posted on February 1st, 2013



“The amount of data in our world has been exploding, and analysing large data sets — so-called Big Data — will become a key basis of competition, underpinning new waves of productivity growth, innovation and consumer surplus”.  Not my words, but those of the McKinsey Global Institute (the business and economics research arm of McKinsey) in a report that evidences like no other the value of data for future economic growth.  However, that value will be seriously at risk if the European Parliament accepts the proposal for a pan-European Regulation currently on the table.

Following the publication by the European Commission last year of a proposal for a General Data Protection Regulation aimed at replacing the current national data protection laws across the EU, at the beginning of 2013, Jan Philipp Albrecht (Rapporteur for the LIBE Committee, which is leading the European Parliament’s position on this matter) published his proposed revised draft Regulation.  

Albrecht’s proposal introduces a wide definition of ‘profiling’, which was covered by the Commission’s proposal but not defined.  Profiling is defined in Albrecht’s proposal as “any form of automated processing of personal data intended to evaluate certain personal aspects relating to a natural person or to analyse or predict in particular that natural person’s performance at work, economic situation, location, health, personal preferences, reliability or behaviour“. 

Neither the Commission’s original proposal nor Albrecht’s proposal define “automated processing”.  However, the case law of the European Court of Justice suggests that processing of personal data by automated means (or automated processing) should be understood by contrast with manual processing.   In other words, automated processing is processing carried out by using computers whilst manual processing is processing carried out manually or on paper.  Therefore, the logical conclusion is that the collection of information via the Internet or from transactional records and the placing of that information onto a database — which is the essence of Big Data — will constitute automated processing for the purposes of the definition of profiling in Albrecht’s proposal.

If we link to that the fact that, in a commercial context, all that data will typically be used first to analyse people’s technological comings and goings, and then to make decisions based on perceived preferences and expected behaviours, it is obvious that most activities involving Big Data will fall within the definition of profiling.
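To see how low the bar is, consider a deliberately trivial, entirely hypothetical sketch of ‘automated processing intended to analyse personal preferences’. On the reading above, even something this crude would amount to profiling under Albrecht’s definition:

```python
# Hypothetical purchase histories keyed by customer name.
purchases = {"anna": ["cookery book", "olive oil", "whisk"]}

def infer_interest(history):
    """A crude automated analysis of personal preferences: count
    cooking-related purchases and infer an interest from them."""
    cooking_terms = {"cookery book", "olive oil", "whisk", "saffron"}
    hits = sum(1 for item in history if item in cooking_terms)
    return "likely interested in cooking" if hits >= 2 else "unknown"

print(infer_interest(purchases["anna"]))  # likely interested in cooking
```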

The legal threat is therefore very clear given that, under Albrecht’s proposal, any data processing activities that qualify as ‘profiling’ will be unlawful by default unless those activities are:

*      necessary for entering into or performing a contract at the request of the individual – bearing in mind that “contractual necessity” is very strictly interpreted by the EU data protection authorities to the point that if the processing is not strictly necessary from the point of view of the individuals themselves, it will not be regarded as necessary;

*      expressly authorised by EU or Member State law – which means that a statutory provision has to specifically allow such activities; or

*      with the individual’s consent – which must be specific, informed, explicit and freely given, taking into account that under Albrecht’s proposal, consent is not valid where the data controller is in a dominant market position or where the provision of a service is made conditional on the permission to use someone’s data.

In addition, there is a blanket prohibition on profiling activities involving sensitive personal data, discriminatory activities or children’s data.

So the outlook is simple: either the European Parliament figures out how to regulate profiling activities in a more balanced way or Big Data will become No Data.

 

Killing the Internet

Posted on January 25th, 2013



The beginning of 2013 could not have been more dramatic for the future of European data protection.  After months of deliberations, veiled announcements and guarded statements, the rapporteur of the European Parliament’s committee responsible for taking forward the ongoing legislative reform has revealed his position loudly and clearly.  Jan Albrecht’s proposal is by no means the final say of the Parliament but it is an indication of where an MEP who has thought long and hard about what the new data protection law should look like stands.  The reactions have been equally loud.  The European Commission has calmly welcomed the proposal, whilst some Member States’ governments have expressed serious concerns about its potential impact on the information economy.  Amongst the stakeholders, the range of opinions varies quite considerably – Albrecht’s approach is praised by regulators whilst industry leaders have massive misgivings about it.  So who is right?  Is this proposal the only possible way of truly protecting our personal information or have the bolts been tightened too much?

There is nothing more appropriate than a dispassionate legal analysis of some key elements of Albrecht’s proposal to reveal the truth: if the current proposal were to become law today, many of the most popular and successful Internet services we use daily would become automatically unlawful.  In other words, there are some provisions in Albrecht’s draft proposal that, when combined, would not only cripple the Internet as we know it, but would also stall one of the most promising building blocks of our economic prosperity: the management and exploitation of personal information.  Sensationalist?  Consider this:

*     Traditionally, European data protection law has required that in order to collect and use personal data at all, one has to meet a lawful ground for processing.  The European Commission had intended to carry on with this tradition but ensuring that the so-called ‘legitimate interests’ ground, which permits data uses that do not compromise the fundamental rights and freedoms of individuals, remained available.  Albrecht proposes to replace this balancing exercise with a list of what qualifies as a legitimate interest and a list of what doesn’t.  The combination of both lists has the effect of ruling out any data uses which involve either data analytics or simply the processing of large amounts of personal data, so the obvious outcome is that the application of the ‘legitimate interests’ ground to common data collection activities on the Internet is no longer possible.

*     Albrecht’s aim of relegating reliance on the ‘legitimate interests’ ground to very residual cases is due to the fact that he sees individuals’ consent as the primary basis for all data uses.  However, the manner and circumstances under which consent may be obtained are strictly limited.  Consent is not valid if the recipient is in a dominant market position.  Consent for the use of data is not valid either if presented as a condition of the terms of a contract and the data is not strictly necessary for the provision of the relevant service.  All that means that if a service is offered for free to the consumer – like many of the most valuable things on the Internet – but the provider of that service is seeking to rely on the value of the information generated by the user to operate as a business, there will not be a lawful way for that information to be used.

*     To finish things off, Albrecht delivers a killing blow through the concept of ‘profiling’.  Defined as automated processing aimed at analysing things like preferences and behaviour, it covers what has become the pillar of e-commerce and is set to change the commercial practices of every single consumer-facing business going forward.  However, under Albrecht’s proposal, such practices are automatically banned and only permissible with the consent of the individual, which, as shown above, is pretty much mission impossible.

The collective effect of these provisions is truly devastating.  This is not an exaggeration.  It is the outcome of a simple legal analysis of a proposal deliberately aimed at restricting activities seen as a risk to people.  The decision that needs to be made now is whether such a risk is real or perceived and, in any event, sufficiently great to merit curtailing the development of the most sophisticated and widely used means of communication ever invented. 

 
This article was first published in Data Protection Law & Policy in January 2013.

The anonymisation challenge

Posted on November 29th, 2012 by



For a while now, it has been suggested that one of the ways of tackling the risks to personal information, beyond protecting it, is to anonymise it.  That means to stop such information being personal data altogether.  The effect of anonymisation of personal data is quite radical – take personal data, perform some magic on it and that information is no longer personal data.  As a result, it becomes free from any protective constraints.  Simple.  People’s privacy is no longer threatened and users of that data can run wild with it.  Everybody wins.  However, as we happen to be living in the ‘big data society’, the problem is that, given the amount of information we generate as individuals, what used to be pure statistical data is becoming so granular that the real value of that information is typically linked to each of the individuals from whom it originates.  Is true anonymisation actually possible then?

The UK Information Commissioner believes that given the potential benefits of anonymisation, it is at least worthwhile having a go at it.  With that in mind, the ICO has produced a chunky code of practice aimed at showing how to manage privacy risks through anonymisation.  According to the code itself, this is the first attempt ever made by a data protection regulator to explain how to rely on anonymisation techniques to protect people’s privacy, which is quite telling about the regulators’ faith in anonymisation given that the concept is already mentioned in the 1995 European data protection directive.  Nevertheless, the ICO is relentless in its defence of anonymisation as a tool that can help society meet its information needs in a privacy-friendly way.

The ICO believes that the legal test of whether information qualifies as personal data or not allows anonymisation to be a realistic proposition.  The reason for that is that EU data protection law only kicks in when someone is identifiable taking into account all the means ‘likely reasonably’ to be used to identify the individual.  In other words and as the code puts it, the law is not framed in terms of the mere possibility of an individual being identified.  The definition of personal data is based on the likely identification of an individual.  Therefore, the ICO argues that although it may not be possible to determine with absolute certainty that no individual will ever be identified as a result of the disclosure of anonymous data, that does not mean that personal data has been disclosed.

One of the advantages of anonymisation is that technology itself can help make it even more effective.  As with other privacy-friendly manifestations of technology – such as encryption and anti-malware software – the practice of anonymising data is likely to evolve at the same speed as the chances of identification.  This is so because technological evolution is in itself neutral and anonymisation techniques can and should evolve as the uses of data become more sophisticated.  What is clear is that whilst some anonymisation techniques are weak because reintroducing personal identifiers is as easy as stripping them out, technology can also help bulletproof anonymised data.

What makes anonymisation less viable though is the fact that in reality there will always be a risk of identification of the individuals to whom the data relates.  So the question is how remote that risk must be for anonymisation to work.  The answer is that it depends on the level of identification that turns non-personal data into personal data.  If personal data and personally identifiable information were the same thing, it would be much easier to establish whether a given anonymisation process has been effective.  But they are not because personal data goes beyond being able to ‘name’ an individual.  Personal data is about being able to single out an individual so the concept of identification can cover many situations which make anonymisation genuinely challenging.
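The point that personal data is about ‘singling out’ rather than naming can be illustrated concretely.  The following sketch – with entirely hypothetical data, field names and helper functions – shows why simply stripping names is a weak anonymisation technique: combinations of remaining quasi-identifiers can still isolate one individual.

```python
# Minimal illustration (hypothetical data): naive anonymisation vs singling out.
from collections import Counter

records = [
    {"name": "A", "postcode": "EC1A", "birth_year": 1975, "sex": "F"},
    {"name": "B", "postcode": "EC1A", "birth_year": 1975, "sex": "M"},
    {"name": "C", "postcode": "EC1A", "birth_year": 1975, "sex": "F"},
    {"name": "D", "postcode": "SW1A", "birth_year": 1980, "sex": "M"},
]

QUASI_IDENTIFIERS = ("postcode", "birth_year", "sex")

def strip_direct_identifiers(rows):
    """Naive anonymisation: drop only the 'name' field."""
    return [{k: v for k, v in r.items() if k != "name"} for r in rows]

def k_anonymity(rows):
    """Size of the smallest group sharing the same quasi-identifier
    combination.  k == 1 means someone can still be singled out."""
    counts = Counter(tuple(r[q] for q in QUASI_IDENTIFIERS) for r in rows)
    return min(counts.values())

anonymised = strip_direct_identifiers(records)
print(k_anonymity(anonymised))  # prints 1: one record remains unique
```

In this toy dataset the last record is the only one with its postcode/birth-year/sex combination, so even without a name it can be singled out – exactly the kind of identification that keeps ‘anonymised’ data within the definition of personal data.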

The ICO is optimistic about the benefits and the prospect of anonymisation.  In certain cases – mostly in the context of public sector data uses – it will clearly be possible to derive value from truly anonymised data.  In many other cases however, it is difficult to see how anonymisation in isolation will achieve its end, as data granularity will prevail in order to maximise the value of the information.  In those situations, the gap left by imperfect anonymisation will need to be filled in by a good and fair level of data protection and, in some other cases, by the principle of ‘privacy by default’.  But that’s a different kind of challenge.

 
This article was first published in Data Protection Law & Policy in November 2012.

What to do when you can’t delete data?

Posted on October 2nd, 2012 by



How many lawyers have written terms into data processing contracts along the following lines:  “Upon termination or expiry of this Agreement, the data processor shall delete any and all copies of the Personal Data in its possession or control”?

It’s a classic example of a legal clause that’s ever so easy to draft but, in this day and age, almost impossible to implement in practice.  In most data processing ecosystems, the reality is that there seldom exists just a single copy of our data; instead, our data is distributed, backed-up, and archived across multiple systems, drives and tapes, and often across different geographic locations.  Far from being a bad thing, data distribution, archival and back-up better preserves the availability and integrity of our records.  But the quid pro quo of greater data resilience is that commitments to comprehensively wipe every last trace of our data are simply unrealistic and unachievable.

Nevertheless, once data has fulfilled its purpose, deletion is seemingly what the law requires.  The fifth principle of the Data Protection Act 1998 (implementing Article 6(1)(e) of Directive 95/46/EC) says that: “Personal data processed for any purpose or purposes shall not be kept for longer than is necessary for that purpose or those purposes.”  So how to reconcile this black and white approach to data deletion with the reality of modern day data processing systems?

Thankfully, the ICO has the answer, which it provides in a recently-published guidance note on “Deleting personal data” (available here).  The ICO starts off by acknowledging the difficulties outlined above, commenting that “In the days of paper records it was relatively easy to say whether information had been deleted or not, for example through incineration. The situation can be less certain with electronic storage, where information that has been ‘deleted’ may still exist, in some form or another, within an organisation’s systems.”

The sensible answer it arrives at is to say that, if data cannot be deleted for technical or other reasons, then it should instead be put ‘beyond use’.  Putting data ‘beyond use’ has four components, namely:

  1. ensuring that the organisation will not and cannot use the personal data to inform any decision in respect of any individual or in a manner that affects the underlying individuals in any way;
  2. not giving any other organisation access to the personal data;
  3. at all times protecting the personal data with appropriate technical and organisational security; and
  4. committing to delete the personal data if or when this becomes possible.

Broadly speaking, you can condense the four components above into: “Delete it if you can and, if you can’t, make sure it’s stored securely and don’t let anyone use it”. Which is, of course, entirely sensible advice.
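As a rough sketch of what that advice might look like in a system, the following code – a hypothetical illustration, not an implementation the ICO prescribes – flags records as ‘beyond use’ so that ordinary reads are refused, while keeping them under the store’s security controls until real deletion becomes possible.

```python
# Hypothetical sketch of the ICO's 'beyond use' idea: block use of records
# that cannot yet be physically deleted, and purge them when possible.
class RecordStore:
    def __init__(self):
        self._records = {}       # record_id -> data (assume encrypted at rest)
        self._beyond_use = set() # ids flagged as beyond use

    def put_beyond_use(self, record_id):
        """Components 1 and 2: block any further use or disclosure."""
        self._beyond_use.add(record_id)

    def read(self, record_id):
        # Ordinary access (including subject access searches) refuses
        # beyond-use records; the data is treated as if deleted.
        if record_id in self._beyond_use:
            raise PermissionError("record is beyond use")
        return self._records[record_id]

    def purge_deletable(self, can_delete):
        """Component 4: delete for real once it becomes technically
        possible.  `can_delete` is a per-record callable (an assumption
        made for this sketch)."""
        for rid in list(self._beyond_use):
            if can_delete(rid):
                del self._records[rid]
                self._beyond_use.discard(rid)
```

The design choice mirrors the guidance: the flag does not shrink the security obligation (component 3 still applies to the stored copies), it only removes the data from active use pending eventual deletion.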

It does raise one interesting problem though:  what to do when the individual data subject requests access to his or her data that has been put beyond use?  Here, the ICO again takes a business-friendly view saying simply that “We will not require data controllers to grant individuals subject access to the personal data provided that all four safeguards above are in place.”  In other words, the business does not need to instigate extensive (and expensive) searches of records that have been put beyond use just because an individual requests access to his or her data – for the purposes of subject access, this inert data is treated as if it had been deleted.

But the ICO does issue a warning: “It is bad practice to give a user the impression that a deletion is absolute, when in fact it is not.” So the message to take away is this: make sure you do not commit yourself to data deletion standards that you know, in all likelihood, you can’t and won’t meet.   And, by the same token, don’t let your lawyers commit you to these either!

The future of privacy

Posted on May 31st, 2012 by



Not that long ago, reading this article (let alone writing it) would have been regarded as nerdy.  Data protection used to be seen as arcane and irrelevant to businesses and ordinary people.  Introducing yourself as a data protection lawyer or a privacy professional was a recipe for embarrassment and a sure way of getting some funny looks.  However, at some point, something suddenly changed.  What was wacky is now cool, and what seemed like an obscure legal discipline with funny jargon and odd rules has become a critical consideration for business and government.  What happened?  What was the event that radically altered our perception of the importance of personal information for the world’s prosperity?  The crucial catalyst was in fact a combination of three factors that will also shape the future of privacy and data protection going forward.

The first one is the most obvious of all because it has permeated our lives to such a degree that we can no longer live without it.  Remember life before e-mail, mobile phones, the Internet, search engines, CCTV cameras, biometric passports, chip & pin, apps and cookies?  The evolution of technology has been the primary contributor to the growing importance of data protection as digitalisation has led to a never-ending, yet not always visible, churn of personal data.  The second one has been the realisation that personal data is a very valuable asset.  Some examples: last year, Google’s turnover was nearly $38bn, LinkedIn doubled the value of its shares on the day it floated on the stock exchange, and Facebook’s IPO reportedly created 1,000 millionaires overnight.  What these businesses have in common in addition to being amazing success stories of the post-dotcom boom is that their success is based on the power and value of personal information.  The third critical factor is no other than the reality of data globalisation: the fact that geographical distance and cultural barriers have become almost negligible for the exploitation of data.

These three factors have thrown many existing preconceptions up in the air and turned legal conundrums into business critical issues.  Getting the right answer to which law applies or who is in control of the information generated by our daily use of global interconnecting technologies has massive practical implications.  Some will be purely financial and others political, but their significance has not gone unnoticed.  Even the very thing at the centre of the legal debate – ascertaining what is and what isn’t personal data – has become an issue of great economic impact for businesses across all industry sectors, from technology to financial services and from retail to life sciences.  As an overarching theme, the question of how to ensure global compliance with maximum effectiveness and minimum cost has suddenly focused the minds of business leaders and politicians.

But having got to this place, the question that we now need to address is this: what happens next?  Or in other words: what is the future of privacy and data protection?  For policy makers and data-reliant businesses alike the answer to that question lies in addressing the three issues that have so radically changed things.  Regulating and managing the evolution of technology necessarily involves understanding technology.  That means that a likely component of tomorrow’s privacy regulation will be about explaining technology so that its users can understand what is likely to happen to the personal information generated by their use of it.  This is transparency 2.0 and, from a compliance perspective, collecting and using data will entail making the impenetrable world of new technologies understandable to everyone.  But beyond pure transparency, something that no legal regime has addressed to date but that will form part of the legal obligations of the future is the provision of value.  When a government or a business asks a citizen or customer for their personal information, it will only be fair to give that person something back or to share with individuals part of the value extracted from their data.  That would certainly be a much better way of getting the control balance right than seeking an empty and meaningless consent.

One remaining challenge is the international nature of data flows and information exploitation.  Data protection will never be a local issue again.  Data is no longer transferred from A to B.  Geographically speaking, where data actually is in an interconnected world is completely irrelevant, because data is ever accessible from everywhere.  Law and practice will have to come to terms with that.  Overcoming the legal limitations affecting international data transfers has always been a difficult challenge because, even in the old days, data was pretty fluid.  Today’s and tomorrow’s data globalisation needs a completely different approach which focuses on mutual recognition of rules, regulatory collaboration and incentives to do the right thing.

This article was first published in issue number 100 of Data Protection Law & Policy in May 2012.