Archive for the ‘Profiling’ Category

Information Pollution and the Internet of Things

Posted on September 8th, 2013

Kevin Ashton, the man credited with coining the term “The Internet of Things”, once said: “The Internet of Things has the potential to change the world, just as the Internet did. Maybe even more so.”

This couldn’t be more true. The range of potential applications for the Internet of Things, from consumer electronics to energy efficiency and from supply chain management to traffic safety, is breathtaking. Today, there are 6 billion or so connected devices on the planet. By 2020, some estimate that figure will be in the range of 30 to 50 billion. Applying some very basic maths, that’s between 4 and 7 internet-connected “things” per person.

All this, of course, means vast levels of automated data generation, processing and sharing. Forget Big Data: we’re talking mind-blowingly Huge Data. That presents numerous challenges to traditional notions of privacy, and issues of applicability of law, transparency, choice and security have been (and will continue to be) debated at length.

One area that deserves particular attention is how we deal with data access in an everything-connected world. There’s a general notion in privacy that individuals should have a right to access their information – indeed, this right is hard-coded into EU law. But when so much information is collected – and across so many devices – how can we provide individuals with meaningful access to information in a way that is not totally overwhelming?

Consider a world where your car, your thermostat, your DVR, your phone, your security system, your portable health device, and your fridge are all trying to communicate information to you on a 24 x 7 x 365 basis: “This road’s busy, take that one instead”, “Why not lower your temperature by two degrees?”, “That program you recorded is ready to watch”, “You forgot to take your medication today” and so on.

The problem will be one of information pollution: there will be just too much information available. How do you stop individuals feeling completely overwhelmed by this? The truth is that no matter how much we, as a privacy community, try to preserve rights for individuals to access as much data as possible, most will never explore their data beyond a very cursory, superficial level. We simply don’t have the energy or time.

So how do we deal with this challenge? The answer is to abstract away from the detail of the data and make readily available to individuals only the information they want to see, when they want to see it. Very few people want a level of detail typically of interest only to IT forensics experts in complex fraud cases – like what IP addresses they used to access a service or the version number of the software on their device. They want, instead, to have access to information that holds meaning for them, presented in a real, tangible and easy to digest way. For want of a better descriptor, the information needs to be presented in a way that is “accessible”.

This means information innovation will be the next big thing: maybe we’ll see innovators create consumer-facing dashboards that collect, sift and simplify vast amounts of information across their many connected devices, perhaps using behavioural, geolocation and spatial profiling techniques to tell consumers the information that matters to them at that point in time.
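In outline, such a dashboard is a scoring-and-filtering problem: rank incoming device messages against the user's current context and surface only the few that clear a relevance bar. The sketch below is purely illustrative – the device names, context labels, weights and threshold are all invented for the example:

```typescript
// Hypothetical notification filter for a connected-device dashboard.
// Each device emits messages; the dashboard surfaces only those scored
// relevant to the user's current context. All weights are illustrative.

interface DeviceMessage {
  device: string;
  text: string;
  urgency: number;    // 0..1, assigned by the emitting device
  contexts: string[]; // contexts in which the message is useful
}

function relevantMessages(
  messages: DeviceMessage[],
  currentContext: string,
  threshold = 0.5
): DeviceMessage[] {
  return messages
    .map(m => ({
      msg: m,
      // boost messages that match the user's current context
      score: m.urgency + (m.contexts.includes(currentContext) ? 0.3 : 0),
    }))
    .filter(s => s.score >= threshold)
    .sort((a, b) => b.score - a.score) // most relevant first
    .map(s => s.msg);
}

const feed: DeviceMessage[] = [
  { device: "car", text: "This road's busy, take the other one", urgency: 0.6, contexts: ["commuting"] },
  { device: "dvr", text: "Your recording is ready to watch", urgency: 0.2, contexts: ["at-home"] },
  { device: "health", text: "You forgot your medication today", urgency: 0.9, contexts: ["at-home", "commuting"] },
];

console.log(relevantMessages(feed, "commuting").map(m => m.device));
// → [ 'health', 'car' ]  (the DVR message is suppressed while commuting)
```

The point is not the particular scoring rule but the abstraction: the raw firehose of device data stays behind the scenes, and the user sees only a short, context-ranked digest.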

And if this all sounds a little too far-fetched, then check out services like Google Now and TripIt, to name just a couple. Services are already emerging to address information pollution, and that’s with a mere 6 billion devices so far. Imagine what will happen with the next 30 billion or so!

A Brave New World Demands Brave New Thinking

Posted on June 3rd, 2013

Much has been said in the past few weeks and months about Google Glass, Google’s latest innovation that will see it shortly launch Internet-connected glasses with a small computer display in the corner of one lens that is visible to, and voice-controlled by, the wearer. The proposed launch capabilities of the device itself are—in pure computing terms—actually relatively modest: the ability to search the web, bring up maps, take photographs and video and share to social media.

So far, so iPhone.

But, because users wear and interact with Google Glass wherever they go, they will have a depth of relationship with their device that far exceeds any previous relationship between man and computer. Then throw in the likely short- to mid-term evolution of the device—augmented reality, facial recognition—and it becomes easy to see why Google Glass is so widely heralded as The Next Big Thing.

Of course, with an always-on, always-worn and always-connected, photo-snapping, video-recording, social media-sharing device, privacy issues abound, ranging from the potential for crowd-sourced law enforcement surveillance to the more mundane forgetting-to-remove-Google-Glass-when-visiting-the-men’s-room scenario. These concerns have seen a very heated debate play out across the press, on TV and, of course, on blogs and social media.

But to focus the privacy debate just on Google Glass really misses the point. Google Glass is the headline-grabber, but in reality it’s just the tip of the iceberg when it comes to the wearable computing products that will increasingly be hitting the market over the coming years. Pens, watches, glasses (Baidu is launching its own smart glasses too), shoes, whatever else you care to think of—will soon all be Internet-connected. And it doesn’t stop at wearable computing either; think about Internet-connected home appliances: We can already get Internet-connected TVs, game consoles, radios, alarm clocks, energy meters, coffee machines, home safety cameras, baby alarms and cars. Follow this trend and, pretty soon, every home appliance and personal accessory will be Internet-connected.

All of these connected devices—this “Internet of Things”—collect an enormous volume of information about us, and in general, as consumers we want them: They simplify, organize and enhance our lives. But, as a privacy community, our instinct is to recoil at the idea of a growing pool of networked devices that collect more and more information about us, even if their purpose is ultimately to provide services we want.

The consequence of this tends to be a knee-jerk insistence on ever-strengthened consent requirements and standards: Surely the only way we can justify such a vast collection of personal information, used to build incredibly intricate profiles of our interests, relationships and behaviors, is to predicate collection on our explicit consent. That has to be right, doesn’t it?

The short answer to this is “no”—though not, as you might think, for the traditionally given reasons that users don’t like consent pop-ups or that difficulties arise when users refuse, condition or withdraw their consents. 

Instead, it’s simply that explicit consent is lazy. Sure, in some circumstances it may be warranted, but to look to explicit consent as some kind of data collection panacea will drive poor compliance that delivers little real protection for individuals.

Why? 

Because when you build compliance around explicit consent notices, it’s inevitable that those notices will become longer, all-inclusive, heavily caveated and designed to guard against risk. Consent notices become seen as a legal issue, not a design issue, inhibiting the adoption of Privacy by Design development so that, rather than enhancing user transparency, they have the opposite effect. Instead, designers build products with little thought to privacy, safe in the knowledge that they can simply ‘bolt on’ a detailed consent notice as a ‘take it or leave it’ proposition on installation or first use, just like terms of service are now. And, as technology becomes ever more complicated, so it becomes ever more likely that consumers won’t really understand what it is they’re consenting to anyway, no matter how well it’s explained. It’s also a safe bet that users will simply ignore any notice that stands between them and the service they want to receive. If you don’t believe me, then look at cookie consent as a case in point.

Instead, it’s incumbent upon us as privacy professionals to think up a better solution. One that strikes a balance between the legitimate expectations of the individual with regard to his or her privacy and the legitimate interests of the business with regard to its need to collect and use data. One that enables the business to deliver innovative new products and services to consumers in a way that demonstrates respect for their data and engenders their trust and which does not result in lazy, consent-driven compliance. One that encourages controllers to build privacy functionality into their products from the very outset, not address it as an afterthought.

Maybe what we need is a concept of an online “personal space.”

In the physical world, whether through the rules of social etiquette, an individual’s body language or some other indicator, we implicitly understand that there is an invisible boundary we must respect when standing in close physical proximity to another person. A similar concept could be conceived for the online world—ironically, Big Data profiles could help here. Or maybe it’s as simple as promoting a concept of “surprise minimization” as proposed by the California attorney general in her guidance on mobile privacy—the concept that, through Privacy by Design methodologies, you avoid surprising individuals by collecting data from or about them that, in the given context, they would not expect or want.

Whatever the solution is, we’re entering a brave new world; it demands some brave new thinking.

This post first published on the IAPP Privacy Perspectives here.

Profiling at the centre of the debate (again)

Posted on May 30th, 2013

Whilst the European Parliament and the Council of the EU sharpen their positions on the EU data protection reform, the Article 29 Working Party continues with its visible involvement in the process. This time the Working Party has adopted an advisory paper taking a firm view on the issue of profiling.

The Working Party appears to sit somewhere between the Commission’s proposal and Albrecht’s approach. That is still a very strict position to adopt, clearly aimed at eliminating the perceived risks of profiling (although such risks are not identified in the paper).

On the one hand, the Working Party’s advice takes a more severe approach than the Regulation by extending the regime to the “collection” of data for the purposes of profiling. On the other hand, it is less draconian than Albrecht by not applying the regime unless profiling “significantly affects” individuals.

Aside from figuring out what “significantly affects” may mean, which could have academics, lawyers and regulators debating it for life, the most challenging aspect of the Working Party’s advice is their call for explicit consent and data minimisation. These would be real practical challenges given the omnipresent and evolving nature of profiling and I wonder whether they are fully justifiable from a public policy perspective.

In order to answer that question, it is crucial to pin down what the risks of profiling are. As with so many other privacy-related topics, profiling as an activity seems to have a rather emotional slant to it – mainly negative. That is an issue because regulatory decisions should be free from that kind of interference. Therefore, it would be wise to take advantage of the year or so that remains before the draft Regulation becomes law to get this matter right, so that real risks are properly tackled whilst the value of data – not just commercial, but societal as well – is preserved and maximised.

Implied consent getting ever closer in the Netherlands

Posted on May 25th, 2013

On 20 May 2013, Dutch Minister Kamp (Minister for Economic Affairs) presented a bill to amend Article 11.7a of the Dutch Telecommunications Act (‘the cookie law’). Once it passes into law the bill will, among other things, allow website operators to rely on visitors’ implied consent to serve cookies and will also exempt analytics cookies from the consent requirement.

Why these changes are needed

In February this year the Dutch government concluded that the cookie law had overshot its intended objective. The current cookie law requires website owners to obtain visitors’ opt-in consent to virtually all types of cookies, except those which are strictly necessary. This led to widespread adoption of opt-in consent barriers and pop-up screens which, the Government accepts, is undesirable from both a consumer and business standpoint.

The Government believes the problem with the current law is that it applies equally to all cookies, even those with little privacy impact. Because of this, it proposes that the scope of the consent exemptions should expand to include more types of cookies.

New exemptions: analytics cookies, affiliate cookies and A/B testing cookies

Currently, a website operator does not have to obtain consent if cookies are strictly necessary to provide a visitor-requested service. Once the bill enters into effect, a further category of cookies will be exempted from the consent requirement – those which are “absolutely necessary […] to obtain information about the quality and effectiveness of an information society service provided” – provided that this has no or little consequences for the privacy of the user.

First-party and third-party analytics cookies, affiliate referral cookies and A/B testing cookies all seem likely to fall within the scope of this new exemption. However, to ensure that these cookies qualify as having “no or little consequences for the privacy of the user”:

  • the data collected by these cookies must not be used to make a profile of the visitor (e.g. for targeting purposes); and
  • if the website operator shares cookie data with a third party (e.g. an analytics service provider), it must conclude an agreement with the third party that either requires the third party not to use the data for its own purposes or, alternatively, only for defined purposes that have no or little effect on visitors’ privacy.

Implied Consent

For other types of cookies (in particular, targeted advertising cookies), the consent requirements of the cookie law apply in full.  However, the explanatory memorandum to the bill discusses the interpretation of ‘consent’ in great detail and advocates the legal validity of implied consent solutions.

In particular, it advocates that implied consent may be legally derived from the behavior of the visitor of a website – for example, in the case where a visitor is presented with a clear notice about the website’s use of cookies and given options to control those cookies but continues to browse the website.  This is at odds with previous regulatory opinions of the ACM (formerly the OPTA, the relevant regulator for these purposes) which said that implied consent would not constitute valid consent.

Although Dutch recognition of implied consent has been anticipated for a while (see here), this is a critical development for online businesses in the Netherlands.  Once the bill enters into force, website operators will be able to replace their current explicit consent barriers and pop-ups with more user-friendly implied consent banners indicating that continued use of the website without changing cookie settings will constitute consent.
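Stripped to its essentials, an implied-consent banner is a small piece of state logic: show a notice on the visitor's first page view, and treat continued browsing as consent unless the visitor opts out. The sketch below is only an illustration of that logic – the "second page view" trigger and the state names are assumptions, and any real implementation would need to follow the regulator's final guidance and persist the state in a cookie or similar mechanism:

```typescript
// Simplified implied-consent state machine. The banner is shown until the
// visitor either continues browsing (implied consent) or opts out.
// All rules and names here are hypothetical.

type ConsentState = "unknown" | "implied" | "refused";

interface VisitorSession {
  consent: ConsentState;
  pageViews: number;
}

// Called on every page view; returns whether the banner should be shown
// and the updated session state.
function onPageView(session: VisitorSession): { showBanner: boolean; session: VisitorSession } {
  const pageViews = session.pageViews + 1;
  let consent = session.consent;

  // Continued browsing past the first page, without opting out,
  // is treated as implied consent.
  if (consent === "unknown" && pageViews > 1) {
    consent = "implied";
  }
  return { showBanner: consent === "unknown", session: { consent, pageViews } };
}

// The banner's settings link lets the visitor refuse instead.
function optOut(session: VisitorSession): VisitorSession {
  return { ...session, consent: "refused" };
}

// First visit: banner shown, consent still unknown.
let { showBanner, session } = onPageView({ consent: "unknown", pageViews: 0 });
console.log(showBanner, session.consent); // true "unknown"

// Second page view: visitor carried on browsing, so consent is implied.
({ showBanner, session } = onPageView(session));
console.log(showBanner, session.consent); // false "implied"
```

Compared with an explicit opt-in barrier, nothing blocks the visitor's path: the notice is informational, and the consent signal is derived from behaviour.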

All in all, the bill is a major step towards a more pragmatic implementation of the cookie law. With these changes, Dutch law will better balance the privacy interests of website visitors with online businesses’ legitimate data collection activities.

When will the bill enter into force?

The bill is open for public consultation until 1 July 2013, and the Minister must also consult the Council of State and the Dutch Data Protection Authority. On the basis of the consultation responses, the minister may then decide to amend the bill or submit it to Parliament as currently drafted. Parliamentary discussion can be completed within a few months, but may potentially take up to a year. However, given the current momentum behind adopting a more pragmatic cookie regime in the Netherlands, it is anticipated that the overall process will be toward the shorter end of this timescale.

With thanks to our friends Nicole Wolters Ruckert and Maarten Goudsmit, Privacy Attorneys at Kennedy Van der Laan, for this update. 

 

Cookie consent update – implied consent now widespread

Posted on May 15th, 2013

Our latest EU cookie consent tracking table has just been published here.

Latest regional developments:

Our latest table reveals:

* ‘Implied consent’ is currently a valid solution for cookie compliance in nearly three-quarters of EEA Member States.

* Since our last update, cookie consent implementations have been introduced in Norway and Poland.

* There are ongoing cookie regulatory developments in Denmark, the Netherlands, Slovenia and Spain.

Other notable developments

Aside from the regional developments shown in our table, other notable developments include:

* Growing recognition that cookie consent is every bit as relevant in mobile platforms as in desktop platforms – see, for example, the Working Party’s latest opinion on mobile apps (here).

* Major online players like Facebook and Google are adopting notice and choice solutions, likely driving wider industry compliance efforts (see here).

* Consumer protection and advertising regulatory bodies like the OFT and ASA are increasingly showing interest in online tracking and notice/choice issues (see here and here).

* Increasing co-operation between global DPAs on online privacy compliance issues (see here).

All in all, online privacy compliance continues to attract ever greater attention, both within data protection circles and from the wider regulatory environment.  As this issue continues to run and run, the picture emerging is that implied consent is the clear compliance front-runner – both from a regulatory and also from a market-adoption perspective.

Big data means all data

Posted on April 19th, 2013

There is an awesomeness factor in the way data about our digital comings and goings is being captured nowadays.  That awesomeness is such that it cannot even be described in numbers.  In other words, the concept of big data is not about size but about reach.  In the same way that the ‘wow’ of today’s computer memory will turn into a ‘so what’ tomorrow, references to terabytes of data are meaningless to define the power and significance of big data.  The best way to understand big data is to see it as a collection of all possible digital data.  Absolutely all of it.  Some of it will be trivial and most of it will be insignificant in isolation, but when put together its significance becomes clearer – at least to those who have the vision and astuteness to make the most of it.

Take transactional data as a starting point.  One purchase by one person is meaningful up to a point – so if I buy a cookery book, the retailer may be able to infer that I either know someone who is interested in cooking or I am interested in cooking myself.  If many more people buy the same book, apart from suggesting that it may be a good idea to increase the stock of that book, the retailer as well as other interested parties – publishers, food producers, nutritionists – could derive some useful knowledge from those transactions.  If I then buy cooking ingredients, the price of those items alone will give a picture of my spending bracket.  As the number of transactions increases, the picture gets clearer and clearer.  Now multiply the process for every shopper, at every retailer and every transaction.  You automatically have an overwhelming amount of data about what people do with their money – how much they spend, on what, how often and so on.  Is that useful information?  It does not matter, it is simply massive and someone will certainly derive value from it.  
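Computationally, the step from individual purchases to a spending picture is a trivial aggregation – which is precisely the point: the power lies in the reach of the data, not in any sophistication of the processing. A toy sketch (the bracket boundaries and shopper names are invented for illustration):

```typescript
// Toy aggregation of transactions into a per-shopper spending picture.
// Bracket boundaries are entirely illustrative.

interface Transaction {
  shopper: string;
  item: string;
  price: number;
}

function spendingBracket(total: number): string {
  if (total < 50) return "low";
  if (total < 500) return "medium";
  return "high";
}

function profileShoppers(txs: Transaction[]): Map<string, string> {
  // Sum each shopper's spend across all their transactions.
  const totals = new Map<string, number>();
  for (const t of txs) {
    totals.set(t.shopper, (totals.get(t.shopper) ?? 0) + t.price);
  }
  // Every additional transaction sharpens the picture; here we reduce
  // it to a single spending bracket per shopper.
  const profile = new Map<string, string>();
  totals.forEach((total, shopper) => {
    profile.set(shopper, spendingBracket(total));
  });
  return profile;
}

const txs: Transaction[] = [
  { shopper: "alice", item: "cookery book", price: 25 },
  { shopper: "alice", item: "stand mixer", price: 300 },
  { shopper: "bob", item: "coffee", price: 4 },
];

console.log(profileShoppers(txs).get("alice")); // → "medium"
console.log(profileShoppers(txs).get("bob"));   // → "low"
```

Multiply this one-liner of logic across every shopper, every retailer and every transaction, and the "overwhelming amount of data" described above falls out automatically.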

That’s just the purely transactional stuff.  Add information about at what time people turn on their mobile phones, switch on the hot water or check their e-mail, which means of transportation they use to go where and when they enter their workplaces – all easily recordable.  Include data about browsing habits, app usage and means of communication employed.  Then apply a bit of imagination and think about this kind of data gathering in an Internet of Things scenario, where offline everyday activities are electronically connected and digitally managed.  Now add social networking interactions, blogs, tweets, Internet searches and music downloads.  And for good measure, include some data from your GPS, hairdresser and medical appointments, online banking activities and energy company.  When does this stop?  It doesn’t.  It will just keep growing.  It’s big data and is happening now in every household, workplace, school, hospital, car, mobile device and website.

What has happened in an uncoordinated but consistent manner is that all those daily activities have become a massive source of information which someone, somewhere is starting to make use of.  Is this bad?  Not necessarily.  So far, we have seen pretty benign and very positive applications of big data – from correctly spelt Internet searches and useful shopping recommendations to helpful traffic-free driving directions and even predictions in the geographical spread of contagious diseases.  What is even better is that, data misuses aside, the potential of this humongous amount of information is as big as the imagination of those who can get their hands on it, which probably means that we have barely started to scratch the surface of it all.

Our understanding of the potential of big data will improve as we become more comfortable and familiar with its dimensions but even now, it is easy to see its economic and social value.  But with value comes responsibility.  Just as those who extract and transport oil must apply utmost care to the handling of such precious but hazardous material, those who amass and manipulate humanity’s valuable data must be responsible and accountable for their part.  It is not only fair but entirely right that the greater the potential, the greater the responsibility, and that anyone entrusted with our information should be accountable to us all.  It should not be up to us to figure out and manage what others are doing with our data.  Frankly, that is simply unachievable in a big data world.  But even if we cannot measure the size of big data, we must still find a way to apportion specific and realistic responsibilities for its exploitation.

 

This article was first published in Data Protection Law & Policy in April 2013.

If Google cares about cookie consent, so should you.

Posted on April 16th, 2013

Over the weekend, Google made a subtle – but significant – modification to its online search service in the EU: nearly two years after Europe’s deadline for EU Member States to adopt national cookie consent laws, Google rolled out a cookie consent banner on its EU search sites.

If you’re a visitor from the US, you may have missed it: the banner shows only if you visit Google sites from within the EU. However, EU visitors will clearly see Google’s consent banner placed at the bottom of its main search page and at the top of subsequent search results. As well as informing visitors that “By using our services, you agree to our use of cookies“, the banner provides a “Learn more” link that visitors can click on to watch a video about Google’s cookie use and to see disclosures about the cookies it serves.

This development alone would be significant. But taken together with Facebook’s recent announcement that it will deploy the AdChoices icon (another implied consent solution for targeted adverts) on ads served through its FBX exchange, the implications become huge for the following reasons:

* CPOs will find selling cookie consent adoption much easier now. Selling the need to implement cookie consent to the business has always been a challenge. The thinking among marketing, analytics and web operations teams has always been that cookie consent is expensive to implement, time consuming to maintain, and disruptive to the user experience and data collection practices. Other than the occasional penned letter by regulators there’s been no “real” enforcement to date and, with patchy market adoption of cookie consent, many businesses have performed a simple cost / benefit analysis and chosen inaction over compliance. But when two of the Internet’s most heavily scrutinised businesses actively engage with cookie consent, they clearly think it’s an issue worth caring about – and that means it’s an issue YOU need to care about too. The “Google does it” argument is a powerful tool to persuade the business it needs to re-think its strategy and adopt a cookie consent solution.

* Regulatory enforcement just got easier. Rightly or wrongly, a perceived challenge for regulators wanting to enforce non-compliance has been that, before taking measures against the general publisher and advertiser population, they need first to address the behaviours of the major Internet players. While never overtly acknowledged, the underlying concern has been that any business pursued for not adopting a cookie banner would cry “What about them?”, immediately presenting regulators with a challenge: do they continue to pursue that business and risk public criticism for overlooking the bigger fish, or do they pursue the bigger fish and risk getting drawn into expensive, resource-draining legal battles with them? The result to date has been regulatory stalemate, but these developments could unlock this perceived barrier. While it’s not the case that they will result in a sudden flurry of enforcement activity overnight, they are one of many factors that could start to tip the scales towards some form of meaningful enforcement in future.

* Implied consent IS the accepted market standard. When the cookie consent law was first proposed, there were huge concerns that we would be set upon by an avalanche of consent pop-up windows every time we logged online. Whizz forward a few years, and thankfully this hasn’t happened, whatever regulatory preferences may exist for cookie opt-ins. Instead, over time, we’ve seen Member States and – perhaps more importantly – the market grow more and more accepting of implied consent solutions. Adoption by major players like Facebook and Google lends significant credibility to implied consent, and smaller businesses will undoubtedly turn to the approaches used by these major players when seeking their own compliance inspiration. Implied consent has become the de facto market standard and seems set to remain that way for the foreseeable future. Businesses delaying compliance adoption due to concerns about the evolution of consent requirements in the EU now have the certainty they need to act.

This post first appeared in the IAPP’s Privacy Perspectives blog, available here.

Europe continues to embrace cookie consent

Posted on February 5th, 2013

We’ve just published an updated table of European cookie consent requirements (available here), which makes clear that Member State adoption of local cookie consent laws continues to spread.

Our latest update reveals that:

*  24 out of 30 EEA Member States have now adopted national cookie consent rules.

*  Since our last update, Poland, Portugal and Slovenia have adopted new local laws governing cookie consent.

*  There are ongoing regulatory developments with regard to cookie consent guidance and enforcement in Denmark, Italy, Ireland and the UK.

With cookie consent rules now adopted across nearly all European territories, online businesses operating without a notice and consent strategy face real exposure that they need to address and resolve promptly.  And given the recent news of the first ever group privacy claim in the UK relating to cookies, non-compliance risk is rising from “simmering” to “boiling”!

Big Data at risk

Posted on February 1st, 2013

“The amount of data in our world has been exploding, and analysing large data sets — so-called Big Data — will become a key basis of competition, underpinning new waves of productivity growth, innovation and consumer surplus”.  Not my words, but those of the McKinsey Global Institute (the business and economics research arm of McKinsey) in a report that evidences like no other the value of data for future economic growth.  However, that value will be seriously at risk if the European Parliament accepts the proposal for a pan-European Regulation currently on the table.

Following the publication by the European Commission last year of a proposal for a General Data Protection Regulation aimed at replacing the current national data protection laws across the EU, at the beginning of 2013, Jan Philipp Albrecht (Rapporteur for the LIBE Committee, which is leading the European Parliament’s position on this matter) published his proposed revised draft Regulation.  

Albrecht’s proposal introduces a wide definition of ‘profiling’, which was covered by the Commission’s proposal but not defined.  Profiling is defined in Albrecht’s proposal as “any form of automated processing of personal data intended to evaluate certain personal aspects relating to a natural person or to analyse or predict in particular that natural person’s performance at work, economic situation, location, health, personal preferences, reliability or behaviour“. 

Neither the Commission’s original proposal nor Albrecht’s proposal define “automated processing”.  However, the case law of the European Court of Justice suggests that processing of personal data by automated means (or automated processing) should be understood by contrast with manual processing.   In other words, automated processing is processing carried out by using computers whilst manual processing is processing carried out manually or on paper.  Therefore, the logical conclusion is that the collection of information via the Internet or from transactional records and the placing of that information onto a database — which is the essence of Big Data — will constitute automated processing for the purposes of the definition of profiling in Albrecht’s proposal.

If we link to that the fact that, in a commercial context, all that data will typically be used first to analyse people’s technological comings and goings, and then to make decisions based on perceived preferences and expected behaviours, it is obvious that most activities involving Big Data will fall within the definition of profiling.

The legal threat is therefore very clear given that, under Albrecht’s proposal, any data processing activities that qualify as ‘profiling’ will be unlawful by default unless those activities are:

*      necessary for entering into or performing a contract at the request of the individual – bearing in mind that “contractual necessity” is very strictly interpreted by the EU data protection authorities to the point that if the processing is not strictly necessary from the point of view of the individuals themselves, it will not be regarded as necessary;

*      expressly authorised by EU or Member State law – which means that a statutory provision has to specifically allow such activities; or

*      with the individual’s consent – which must be specific, informed, explicit and freely given, taking into account that under Albrecht’s proposal, consent is not valid where the data controller is in a dominant market position or where the provision of a service is made conditional on the permission to use someone’s data.

In addition, there is a blanket prohibition on profiling activities involving sensitive personal data, discriminatory activities or children’s data.

So the outlook is simple: either the European Parliament figures out how to regulate profiling activities in a more balanced way or Big Data will become No Data.

 

Killing the Internet

Posted on January 25th, 2013 by



The beginning of 2013 could not have been more dramatic for the future of European data protection.  After months of deliberations, veiled announcements and guarded statements, the rapporteur of the European Parliament’s committee responsible for taking forward the ongoing legislative reform has revealed his position loudly and clearly.  Jan Albrecht’s proposal is by no means the final say of the Parliament, but it is an indication of where an MEP who has thought long and hard about what the new data protection law should look like stands.  The reactions have been equally loud.  The European Commission has calmly welcomed the proposal, whilst some Member States’ governments have expressed serious concerns about its potential impact on the information economy.  Amongst the stakeholders, the range of opinions varies quite considerably – Albrecht’s approach is praised by regulators, whilst industry leaders have massive misgivings about it.  So who is right?  Is this proposal the only possible way of truly protecting our personal information, or have the bolts been tightened too much?

There is nothing more appropriate than a dispassionate legal analysis of some key elements of Albrecht’s proposal to reveal the truth: if the current proposal were to become law today, many of the most popular and successful Internet services we use daily would become automatically unlawful.  In other words, there are some provisions in Albrecht’s draft proposal that, when combined, would not only cripple the Internet as we know it but would also stall one of the most promising building blocks of our economic prosperity: the management and exploitation of personal information.  Sensationalist?  Consider this:

*     Traditionally, European data protection law has required that, in order to collect and use personal data at all, one has to meet a lawful ground for processing.  The European Commission had intended to carry on with this tradition whilst ensuring that the so-called ‘legitimate interests’ ground, which permits data uses that do not compromise the fundamental rights and freedoms of individuals, remained available.  Albrecht proposes to replace this balancing exercise with a list of what qualifies as a legitimate interest and a list of what doesn’t.  The combination of both lists has the effect of ruling out any data uses which involve either data analytics or simply the processing of large amounts of personal data, so the obvious outcome is that the application of the ‘legitimate interests’ ground to common data collection activities on the Internet is no longer possible.

*     Albrecht’s aim of relegating reliance on the ‘legitimate interests’ ground to very residual cases stems from the fact that he sees individuals’ consent as the primary basis for all data uses.  However, the manner and circumstances under which consent may be obtained are strictly limited.  Consent is not valid if the recipient is in a dominant market position.  Nor is consent for the use of data valid if presented as a condition of the terms of a contract where the data is not strictly necessary for the provision of the relevant service.  All of that means that if a service is offered for free to the consumer – like many of the most valuable things on the Internet – but the provider of that service is seeking to rely on the value of the information generated by the user to operate as a business, there will not be a lawful way for that information to be used.

*     To finish things off, Albrecht delivers a killing blow through the concept of ‘profiling’.  Defined as automated processing aimed at analysing things like preferences and behaviour, it covers what has become the pillar of e-commerce and is set to change the commercial practices of every single consumer-facing business going forward.  However, under Albrecht’s proposal, such practices are automatically banned and only permissible with the consent of the individual, which, as shown above, is pretty much mission impossible.

The collective effect of these provisions is truly devastating.  This is not an exaggeration.  It is the outcome of a simple legal analysis of a proposal deliberately aimed at restricting activities seen as a risk to people.  The decision that needs to be made now is whether that risk is real or merely perceived and, in any event, sufficiently great to merit curtailing the development of the most sophisticated and widely used means of communication ever invented.

 
This article was first published in Data Protection Law & Policy in January 2013.