Archive for the ‘Smartphones’ Category

A Brave New World Demands Brave New Thinking

Posted on June 3rd, 2013



Much has been said in the past few weeks and months about Google Glass, Google’s latest innovation that will see it shortly launch Internet-connected glasses with a small computer display in the corner of one lens that is visible to, and voice-controlled by, the wearer. The proposed launch capabilities of the device itself are—in pure computing terms—actually relatively modest: the ability to search the web, bring up maps, take photographs and video and share to social media.

So far, so iPhone.

But, because users wear and interact with Google Glass wherever they go, they will have a depth of relationship with their device that far exceeds any previous relationship between man and computer. Then throw in the likely short- to mid-term evolution of the device—augmented reality, facial recognition—and it becomes easy to see why Google Glass is so widely heralded as The Next Big Thing.

Of course, with an always-on, always-worn and always-connected, photo-snapping, video-recording, social media-sharing device, the privacy issues are plentiful, ranging from the potential for crowd-sourced law enforcement surveillance to the more mundane forgetting-to-remove-Google-Glass-when-visiting-the-men’s-room scenario. These concerns have seen a very heated debate play out across the press, on TV and, of course, on blogs and social media.

But to focus the privacy debate just on Google Glass really misses the point. Google Glass is the headline-grabber, but in reality it’s just the tip of the iceberg when it comes to the wearable computing products that will increasingly be hitting the market over the coming years. Pens, watches, glasses (Baidu is launching its own smart glasses too), shoes and whatever else you care to think of will soon all be Internet-connected. And it doesn’t stop at wearable computing either; think about Internet-connected home appliances: We can already get Internet-connected TVs, game consoles, radios, alarm clocks, energy meters, coffee machines, home safety cameras, baby alarms and cars. Follow this trend and, pretty soon, every home appliance and personal accessory will be Internet-connected.

All of these connected devices—this “Internet of Things”—collect an enormous volume of information about us, and in general, as consumers we want them: They simplify, organize and enhance our lives. But, as a privacy community, our instinct is to recoil at the idea of a growing pool of networked devices that collect more and more information about us, even if their purpose is ultimately to provide services we want.

The consequence of this tends to be a knee-jerk insistence on ever-strengthened consent requirements and standards: Surely the only way we can justify such a vast collection of personal information, used to build incredibly intricate profiles of our interests, relationships and behaviors, is to predicate collection on our explicit consent. That has to be right, doesn’t it?

The short answer to this is “no”—though not, as you might think, for the traditionally given reasons that users don’t like consent pop-ups or that difficulties arise when users refuse, condition or withdraw their consents. 

Instead, it’s simply that explicit consent is lazy. Sure, in some circumstances it may be warranted, but to look to explicit consent as some kind of data collection panacea will drive poor compliance that delivers little real protection for individuals.

Why? 

Because when you build compliance around explicit consent notices, it’s inevitable that those notices will become longer, all-inclusive, heavily caveated and designed to guard against risk. Consent notices come to be seen as a legal issue, not a design issue, inhibiting the adoption of Privacy by Design development so that, rather than enhancing user transparency, they have the opposite effect. Instead, designers build products with little thought to privacy, safe in the knowledge that they can simply ‘bolt on’ a detailed consent notice as a ‘take it or leave it’ proposition on installation or first use, just as terms of service are now. And, as technology becomes ever more complicated, it becomes ever more likely that consumers won’t really understand what it is they’re consenting to anyway, no matter how well it’s explained. It’s also a safe bet that users will simply ignore any notice that stands between them and the service they want to receive. If you don’t believe me, then look at cookie consent as a case in point.

Instead, it’s incumbent upon us as privacy professionals to think up a better solution. One that strikes a balance between the legitimate expectations of the individual with regard to his or her privacy and the legitimate interests of the business with regard to its need to collect and use data. One that enables the business to deliver innovative new products and services to consumers in a way that demonstrates respect for their data and engenders their trust and which does not result in lazy, consent-driven compliance. One that encourages controllers to build privacy functionality into their products from the very outset, not address it as an afterthought.

Maybe what we need is a concept of an online “personal space.”

In the physical world, whether through the rules of social etiquette, an individual’s body language or some other indicator, we implicitly understand that there is an invisible boundary we must respect when standing in close physical proximity to another person. A similar concept could be conceived for the online world—ironically, Big Data profiles could help here. Or maybe it’s as simple as promoting a concept of “surprise minimization” as proposed by the California attorney general in her guidance on mobile privacy—the concept that, through Privacy by Design methodologies, you avoid surprising individuals by collecting data from or about them that, in the given context, they would not expect or want.

Whatever the solution is, we’re entering a brave new world; it demands some brave new thinking.

This post was first published on the IAPP Privacy Perspectives here.

Privacy pointers for appreneurs

Posted on May 31st, 2013



While parts of the global economy continue to suffer serious economic shocks, an individual with a computer, internet access and the necessary know-how can join the increasing ranks of the appreneurs – people developing and hoping to make money from apps. Buoyed by the stories of wunderkinds such as 17-year-old Nick D’Aloisio, who sold his Summly app to Yahoo for around £18m earlier this year, many are seeking to become appillionaires! And undoubtedly a rosy future will beckon for those fortunate enough to hit on the right app at the right time.

As the popularity of mobile and tablet devices rises, the proliferation of apps will continue. But some apps will sink without a trace and some will become global hits. Amidst all the excitement, those developing apps would do well to consider certain essential privacy pointers, both to anticipate potential obstacles to widespread adoption and to avoid unwelcome regulatory attention down the road. These include:

1. Think Privacy from the beginning – design your app so that it shows an understanding of privacy issues from the start, e.g. include settings that give individuals control over what data you collect about them, usually by providing an opt-out (see the sketch after this list);

2. Tell individuals what you’re doing – include a notice setting out how you use their data, make sure that the notice is accessible and in a language that people can understand, and adopt a ‘surprise minimisation’ approach so that you can reasonably argue that individuals would not be surprised by the data you collect on them in a given context;

3. Decide whether you’re sharing the data you collect with anyone else – if so, make sure that there’s a good reason to share the data, tell individuals about the data sharing and check to see whether there are any rules that require you to obtain individuals’ consent before sharing their data, e.g. for marketing purposes;

4. Check to see whether you’re collecting special types of data – be aware that certain types of data (such as location data or health data) are considered more intrusive and you may need to obtain an individual’s consent before collecting this data;

5. Implement an implied consent solution when using cookies or other tracking technologies in the EU – the debate is pretty much over on how to comply with the EU cookie rule, since implied consent is increasingly being accepted by regulators (see Phil Lee’s recent blog).
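
To make pointer 1 concrete, here is a minimal sketch of what an opt-out setting might look like in an Android app, written in Kotlin. It is illustrative only – the class and preference names are invented for this example, not a prescribed standard.

```kotlin
import android.content.Context

// Illustrative helper: gate all optional data collection on a
// user-controlled opt-out flag persisted in the app's preferences.
class PrivacySettings(context: Context) {
    private val prefs =
        context.getSharedPreferences("privacy", Context.MODE_PRIVATE)

    // Defaulting to opted-out is the privacy-friendly choice; only
    // default to collection if that matches what your notice says.
    var analyticsOptOut: Boolean
        get() = prefs.getBoolean("analytics_opt_out", true)
        set(value) =
            prefs.edit().putBoolean("analytics_opt_out", value).apply()
}

fun trackEvent(settings: PrivacySettings, eventName: String) {
    if (settings.analyticsOptOut) return // respect the user's choice
    // ... otherwise forward the event to your analytics provider ...
}
```

The point is structural: every optional collection path checks the user’s setting before doing anything, rather than consent being a one-off screen at install.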

While an initiative scrutinising app privacy policies and practices (similar to the ‘Internet Sweep Day’ initiated recently by the Global Privacy Enforcement Network) is probably some time off, appreneurs who can get privacy ‘right’ from the start will have a competitive advantage over those that do not.

Is BYOD secure for your company?

Posted on May 24th, 2013



Over the past few years, BYOD has developed rapidly and has even become common practice within some companies. More and more employees are using their own electronic devices (e.g., smartphones and tablets) at work. The benefits for companies are indisputable in terms of cost savings, work productivity and the functionality that smart devices offer employees. However, BYOD can also pose a threat to the security of a company’s information network and systems when used without the proper level of security. On May 15, 2013, the French Agency for the Security of Information Systems (ANSSI) released a technical paper advising companies to implement stronger security measures when authorizing their employees to use such devices.

The Agency notes that the current security standards used by companies are insufficient to protect their professional data effectively. Electronic devices store large amounts of data obtained directly (e.g., emails, calendars, contacts, photos, documents, SMS) or indirectly (navigation data, geolocation data, history). Some of this data may be considered sensitive by companies (e.g., access codes and passwords, security certificates) and may be used fraudulently to access business information stored on the company’s professional network. Thus, the use of personal electronic devices in the workplace carries a risk that business data may be modified, destroyed or disclosed unlawfully. In particular, the risk of a data security breach deriving from the use of an electronic device is quite high due to the numerous functionalities such devices offer. This risk is generally explained by the vulnerability of the information systems installed on electronic devices, but also by the careless behaviour of employees who are not properly informed about the risks.

The Agency acknowledges that it is unrealistic to expect a high level of security when using mobile devices, regardless of the security parameters applied. Nevertheless, it recommends that companies implement certain security parameters in order to mitigate the risk of a security incident. These parameters should be installed on the employee’s device within a single profile that he or she cannot modify. In addition to the technical measures, companies should also implement organizational measures, such as a security policy and an internal document explaining to employees the authorized uses of IT systems and devices. Finally, those security measures should be reassessed throughout the lifecycle of the electronic device (i.e., the inherent security of the device, the security of the information system before the device is used by the employee, the security conditions applied to the entire pool of electronic devices, and the reinitialization of devices before they are reassigned).

The twenty-one security measures that are outlined in the Agency’s paper are categorized as follows:

- access control: renewal of the password every three months; automatic locking of the device after five minutes of inactivity; use of a PIN code when sensitive data are stored on the device; a limit on the number of attempts to unlock the device (see the sketch after this list);

- security of applications: prohibit the ‘by default’ use of the online application store; prohibit the unauthorized installation of applications; block the geolocation functionality for applications that do not need it; switch off the geolocation functionality when not in use; install security patches on a regular basis;

- security of data and communications: deactivate wireless connections (e.g., Bluetooth, Wi-Fi) when not in use; avoid connecting to unknown wireless networks where possible; apply robust encryption to the internal storage of the device; share sensitive data only over encrypted communication channels in order to maintain the confidentiality and integrity of the data;

- security of the information system: upgrade the device’s information system regularly and automatically by installing security patches; if needed, reinitialize the device entirely once a year.
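
To illustrate how some of these measures translate into practice, here is a rough Kotlin sketch using Android’s device administration APIs to enforce the access-control items above. It is a sketch under assumptions: the receiver class is hypothetical, the app must be granted device administrator rights before any of these calls take effect, and the exact thresholds should follow your own policy.

```kotlin
import android.app.admin.DeviceAdminReceiver
import android.app.admin.DevicePolicyManager
import android.content.ComponentName
import android.content.Context
import java.util.concurrent.TimeUnit

// Hypothetical receiver; it must also be declared as a device admin
// in the app's manifest before the policies below have any effect.
class PolicyReceiver : DeviceAdminReceiver()

fun applyAccessControlPolicy(context: Context) {
    val dpm = context.getSystemService(Context.DEVICE_POLICY_SERVICE)
            as DevicePolicyManager
    val admin = ComponentName(context, PolicyReceiver::class.java)

    // Renew the password every three months.
    dpm.setPasswordExpirationTimeout(admin, TimeUnit.DAYS.toMillis(90))
    // Lock the device automatically after five minutes of inactivity.
    dpm.setMaximumTimeToLock(admin, TimeUnit.MINUTES.toMillis(5))
    // Limit unlock attempts: wipe after ten failed passwords.
    dpm.setMaximumFailedPasswordsForWipe(admin, 10)
}
```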

The Agency explains that these security parameters are incompatible with a BYOD policy involving the combined use of an electronic device both for private and professional purposes. The Agency recommends that professional devices be used exclusively for that purpose (meaning that employees should have a separate device for private purposes), and if the same device is used professionally and privately, that both environments be separated efficiently.

The Agency’s paper is available (in French) via the following link: NP_Ordiphones_NoteTech

Big data means all data

Posted on April 19th, 2013



There is an awesomeness factor in the way data about our digital comings and goings is being captured nowadays.  That awesomeness is such that it cannot even be described in numbers.  In other words, the concept of big data is not about size but about reach.  In the same way that the ‘wow’ of today’s computer memory will turn into a ‘so what’ tomorrow, references to terabytes of data are meaningless to define the power and significance of big data.  The best way to understand big data is to see it as a collection of all possible digital data.  Absolutely all of it.  Some of it will be trivial and most of it will be insignificant in isolation, but when put together its significance becomes clearer – at least to those who have the vision and astuteness to make the most of it.

Take transactional data as a starting point.  One purchase by one person is meaningful up to a point – so if I buy a cookery book, the retailer may be able to infer that I either know someone who is interested in cooking or I am interested in cooking myself.  If many more people buy the same book, apart from suggesting that it may be a good idea to increase the stock of that book, the retailer as well as other interested parties – publishers, food producers, nutritionists – could derive some useful knowledge from those transactions.  If I then buy cooking ingredients, the price of those items alone will give a picture of my spending bracket.  As the number of transactions increases, the picture gets clearer and clearer.  Now multiply the process for every shopper, at every retailer and every transaction.  You automatically have an overwhelming amount of data about what people do with their money – how much they spend, on what, how often and so on.  Is that useful information?  It does not matter; it is simply massive and someone will certainly derive value from it.

That’s just the purely transactional stuff.  Add information about when people turn on their mobile phones, switch on the hot water or check their e-mail, which means of transportation they use to go where and when they enter their workplaces – all easily recordable.  Include data about browsing habits, app usage and means of communication employed.  Then apply a bit of imagination and think about this kind of data gathering in an Internet of Things scenario, where offline everyday activities are electronically connected and digitally managed.  Now add social networking interactions, blogs, tweets, Internet searches and music downloads.  And for good measure, include some data from your GPS, hairdresser and medical appointments, online banking activities and energy company.  When does this stop?  It doesn’t.  It will just keep growing.  It’s big data and it is happening now in every household, workplace, school, hospital, car, mobile device and website.

What has happened in an uncoordinated but consistent manner is that all those daily activities have become a massive source of information which someone, somewhere is starting to make use of.  Is this bad?  Not necessarily.  So far, we have seen pretty benign and very positive applications of big data – from correctly spelt Internet searches and useful shopping recommendations to helpful traffic-free driving directions and even predictions of the geographical spread of contagious diseases.  What is even better is that, data misuses aside, the potential of this humongous amount of information is as big as the imagination of those who can get their hands on it, which probably means that we have barely started to scratch the surface of it all.

Our understanding of the potential of big data will improve as we become more comfortable and familiar with its dimensions but even now, it is easy to see its economic and social value.  But with value comes responsibility.  Just as those who extract and transport oil must apply utmost care to the handling of such precious but hazardous material, those who amass and manipulate humanity’s valuable data must be responsible and accountable for their part.  It is not only fair but entirely right that the greater the potential, the greater the responsibility, and that anyone entrusted with our information should be accountable to us all.  It should not be up to us to figure out and manage what others are doing with our data.  Frankly, that is simply unachievable in a big data world.  But even if we cannot measure the size of big data, we must still find a way to apportion specific and realistic responsibilities for its exploitation.

 

This article was first published in Data Protection Law & Policy in April 2013.

Designing privacy for mobile apps

Posted on March 16th, 2013



My phone is my best friend.  I carry it everywhere with me, and entrust it with vast amounts of my personal information, for the most part with little idea about who has access to that information, what they use it for, or where it goes.  And what’s more, I’m not alone.  There are some 6 billion mobile phone subscribers out there, and I’m willing to bet that most – if not all – of them are every bit as unaware of their mobile data uses as I am.

So it’s hardly surprising that the Article 29 Working Party has weighed in on the issue with an “opinion on apps on smart devices” (available here).  The Working Party splits its recommendations across the four key players in the mobile ecosystem (app developers, OS and device manufacturers, app stores and third parties such as ad networks and analytics providers), with app developers receiving the bulk of the attention.

Working Party recommendations

Most of the Working Party’s recommendations come as no great surprise: provide mobile users with meaningful transparency, avoid data usage creep (data collected for one purpose shouldn’t be used for other purposes), minimise the data collected, and provide robust security.  But other recommendations will raise eyebrows, including that:

(*)  the Working Party doesn’t meaningfully distinguish between the roles of an app publisher and an app developer – mostly treating them as one and the same.  So, the ten-man design agency engaged by Global Brand plc to build it a whizzy new mobile app is effectively treated as having the same compliance responsibilities as Global Brand, even though it will ultimately be Global Brand who publicly releases the app and exploits the data collected through it;

(*)  the Working Party considers EU data protection law to apply whenever a data collecting app is released into the European market, regardless of where the app developer itself is located globally.  So developers who are based outside of Europe but who enjoy global release of their app on Apple’s App Store or Google Play may unwittingly find themselves subjected to EU data protection requirements;

(*)  the Working Party takes the view that device identifiers like UDID, IMEI and IMSI numbers all qualify as personal data, and so should be afforded the full protection of European data protection law.  This has a particular impact on the mobile ad industry, which typically collects these numbers for ad serving and ad tracking purposes but aims to mitigate regulatory exposure by carefully avoiding collection of “real world” identifiers (see the sketch after this list);

(*)  the Working Party places a heavy emphasis on the need for user opt-in consent, and does not address situations where the very nature of the app may make it so obvious to the user what information the app will collect as to make consent unnecessary (or implied through user download); and

(*)  the Working Party does not address the issue of data exports.  Most apps are powered by cloud-based functionality and supported by global service providers meaning that, perhaps more than in any other context, the shortfalls of common data export solutions like model clauses and safe harbor become very apparent.
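
As an aside on the device identifier point, here is an illustrative Kotlin sketch of the kind of mitigation the mobile ad industry relies on: deriving a pseudonymous ID with a salted hash so the raw UDID/IMEI never leaves the device. To be clear, on the Working Party’s analysis the output would still be personal data – pseudonymisation reduces risk, it does not anonymise.

```kotlin
import java.security.MessageDigest

// Illustrative only: derive a stable pseudonymous ID from a device
// identifier plus an app-specific salt, instead of transmitting the
// raw identifier itself.
fun pseudonymousId(deviceId: String, appSalt: String): String {
    val digest = MessageDigest.getInstance("SHA-256")
    val bytes = digest.digest((appSalt + deviceId).toByteArray())
    return bytes.joinToString("") { "%02x".format(it) }
}
```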

Designing for privacy

Mobile privacy is hard.  In her guidance on mobile apps, the California Attorney-General rightly acknowledged that: “Protecting consumer privacy is a team sport. The decisions and actions of many players, operating individually and jointly, determine privacy outcomes for users. Hardware manufacturers, operating system developers, mobile telecommunications carriers, advertising networks, and mobile app developers all play a part, and their collaboration is crucial to enabling consumers to enjoy mobile apps without having to sacrifice their privacy.”

Building mobile apps that are truly privacy compliant requires a privacy by design approach from the outset.  But, for any mobile app build, there are some top tips that developers should be aware of:
  1. Always, always have a privacy policy.  The poor privacy policy has been much maligned in recent years but, whether or not it’s the best way to tell people what you do with their information (it’s not), it still remains an expected standard.  App developers need to make sure they have a privacy policy that accurately reflects how they will use and protect individuals’ personal information, and to make this available both prior to download (e.g. published on the app store download page) and in-app.  Not having one is a sure-fire way to fall foul of privacy authorities – as evidenced in the ongoing Delta Air Lines case.
  2. Surprise minimisation.  The Working Party emphasises the need for user consents and, in certain contexts, consent will of course be appropriate (e.g. when accessing real-time GPS data).  But, to my mind, the better standard is that proposed by the California Attorney-General of “surprise minimisation”, which she explains as the use of “enhanced measures to alert users and give them control over data practices that are not related to an app’s basic functionality or that involve sensitive information.”  Just-in-time privacy notices combined with meaningful user controls are the way forward (see the sketch after this list).
  3. Release “free” and “premium” versions.  The Working Party says that individuals must have real choice over whether or not apps collect personal information about them.  However, developers will commonly complain that real choice simply isn’t an option – if they’re going to provide an app for free, then they need to collect and monetise data through it (e.g. through in-app targeted advertising).  An obvious solution is to release two versions of the app – one for “free” that is funded by exploiting user data, and one that is paid for but collects only the user data necessary to operate the app.  That way, users who don’t want their data monetised can choose to download the paid-for “premium” version instead – in other words, they have choice.
  4. Provide privacy menu settings.  It’s surprising how relatively few apps offer this, but privacy settings should be built into app menus as a matter of course – for example, offering users the ability to delete app usage histories, turn off social networking integration, restrict location data use etc.  Empowered users are happy users, and happy users mean happy regulators.
  5. Know Your Service Providers.  Apps serve as a gateway to user data for a wide variety of mobile ecosystem operators – and any one of those operators might, potentially, misuse the data it accesses.  Developers need to be particularly careful when integrating third party APIs into their apps, making sure that they properly understand their service providers’ data practices.  Failure to do proper due diligence will leave the developer exposed.
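
By way of illustration of tip 2, a just-in-time notice can be as simple as a purpose-specific dialog shown at the moment a feature first needs the data. A minimal Android sketch in Kotlin follows – the wording and the startLocationTracking callback are hypothetical:

```kotlin
import android.app.Activity
import android.app.AlertDialog

// A just-in-time notice: explain why location is needed at the moment
// of first use, and proceed only if the user agrees.
fun requestLocationWithNotice(
    activity: Activity,
    startLocationTracking: () -> Unit  // the app's own location logic
) {
    AlertDialog.Builder(activity)
        .setTitle("Use your location?")
        .setMessage(
            "We use your real-time location only to show nearby results. " +
            "It is never shared with advertisers. You can turn this off " +
            "at any time under Settings > Privacy."
        )
        .setPositiveButton("Allow") { _, _ -> startLocationTracking() }
        .setNegativeButton("Not now", null)
        .show()
}
```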

Any developer will tell you that you don’t build great products by designing to achieve compliance; instead, you build great products by designing a great user experience.  Fortunately, in privacy, both goals are aligned.  A great privacy experience is necessarily part and parcel of a great user experience, and developers need to address users’ privacy needs at the earliest stages of development, through to release and beyond.

2013 to be the year of mobile regulation?

Posted on January 4th, 2013



After a jolly festive period (considerably warmer, I’m led to understand, for me in Palo Alto than for my colleagues in the UK), the New Year is upon us and privacy professionals everywhere will no doubt be turning their minds to what 2013 has in store for them.  Certainly, there are plenty of developments to keep abreast of, ranging from the ongoing EU regulatory reform process through to the recent formal recognition of Binding Corporate Rules for processors.  My partner, Eduardo Ustaran, has posted an excellent blog outlining his predictions here.

But one safe bet for greater regulatory attention this year is mobile apps and platforms.  Indeed, with all the excitement surrounding cookie consent and EU regulatory reform, mobile has remained largely overlooked by EU data protection authorities to date.  Sure, we’ve had the Article 29 Working Party opine on geolocation services and on facial recognition in mobile services.  The Norwegian Data Protection Inspectorate even published a report on mobile apps in 2011 (“What does your app know about you?“).  But really, that’s been about it.  Pretty uninspiring, not to mention surprising, when consumers are fast abandoning their creaky old desktop machines and accessing online services through shiny new smartphones and tablets: Forbes even reports that mobile access now accounts for 43% of total minutes spent on Facebook by its users.

Migration from traditional computing platforms to mobile computing is not, in and of itself, enough to guarantee regulator interest.  But there are plenty of other reasons to believe that mobile apps and platforms will come under increased scrutiny this year:

1.  First, meaningful regulatory guidance is long overdue.  Mobiles are inherently more privacy-invasive than any other computing platform.  We entrust more data to our mobile devices (in my case, my photos, address books, social networking, banking and shopping account details, geolocation patterns, and private correspondence) than to any other platform, and generally with far less security – that four-digit PIN really doesn’t pass muster.  We download apps from third parties we’ve often scarcely ever heard of, with no idea as to what information they’re going to collect or how they’re going to use it, and grant them all manner of permissions without even thinking – why, exactly, does that flashlight app need to know details of my real-time location?  Yet despite the huge potential for privacy invasion, there persists a broad lack of understanding as to who is accountable for compliance failures (the app store, the platform provider, the network provider or the app developer) and what measures they should be implementing to avoid privacy breaches in the first place.  This uncertainty and confusion makes regulatory involvement inevitable.

2.  Second, regulators are already beginning to get active in the mobile space – if this were not the case, the point above would otherwise be pure speculation.  It’s not, though.  On my side of the Pond, we’ve recently seen the California Attorney General file suit against Delta Air Lines for its failure to include a privacy policy within its mobile app (this action itself following letters sent by the AG to multiple app providers warning them to get their acts together).  Then, a few days later, the FTC released a report on children’s data collection through mobile apps, in which it indicated that it was launching multiple investigations into potential violations of the Children’s Online Privacy Protection Act (COPPA) and the FTC Act’s unfair and deceptive practices regime.  The writing is on the wall, and it’s likely EU regulators will begin following the FTC’s lead.

3.  Third, the Article 29 Working Party intends to do just that.  In a press release in October, the Working Party announced that “Considering the rapid increase in the use of smartphones, the amount of downloaded apps worldwide and the existence of many small-sized app-developers, the Working Party… [will] publish guidance on mobile apps… early next year.” So guidance is coming and, bearing in mind that the Article 29 Working Party is made up of representatives from national EU data protection authorities, it’s safe to say that mobile privacy is riding high on the EU regulatory agenda.

In 2010, the Wall Street Journal reported: “An examination of 101 popular smartphone “apps”—games and other software applications for iPhone and Android phones—showed that 56 transmitted the phone’s unique device ID to other companies without users’ awareness or consent. Forty-seven apps transmitted the phone’s location in some way. Five sent age, gender and other personal details to outsiders… Many apps don’t offer even a basic form of consumer protection: written privacy policies. Forty-five of the 101 apps didn’t provide privacy policies on their websites or inside the apps at the time of testing.”  Since then, there hasn’t been a great deal of improvement.  My money’s on 2013 being the year that this will change.

Technology issues that will shape privacy in 2013

Posted on December 13th, 2012



Making predictions as we approach a new year has become a bit of a tradition.  The degree of error is typically proportional to the level of boldness of those predictions, but as in the early days of weather forecasting, the accuracy expectations attached to big statements about what may or may not happen in today’s uncertain world are pretty low.  Having said that, it wouldn’t be particularly risky to assume that during 2013, the EU legislative bodies will be thinking hard about things like whether the current definition of personal data is wide enough, what kind of security breach should trigger a public disclosure, the right amount for monetary fines or the scope of the European Commission’s power to adopt ‘delegated acts’.  But whilst it is easy to get distracted by the fascinating data protection legislative developments currently taking place in the EU, next year’s key privacy developments will be significantly shaped by the equally fascinating technological revolution of our time.

A so far low profile issue from a regulatory perspective has been the ever growing mobile app phenomenon.  Like having a website in the late 90s, launching a mobile app has become a ‘must do’ for any self-respecting consumer-facing business.  However, even the simplest app is likely to be many times more sophisticated than the early websites and will collect much more useful and clever data about its users and their lifestyles.  That is a fact and, on the whole, apps are a very beneficial technological development for the 21st century homo-mobile.  The key issue is how this development can be reconciled with the current data protection rules dealing with information provision, grounds for processing and data proportionality.  Until now, technology has as usual led the way and the law is clumsily trying to follow, but in the next few months we are likely to witness much more legal activity on this front than what we have seen to date.

Mobile data collection via apps has been a focus of attention in the USA for a while, but recent developments are a clue to what is about to happen.  The spark may well have been ignited by the California Attorney General who, in the first ever legal action under the state’s online privacy law, is suing Delta Air Lines for distributing a mobile application without a privacy policy.  Delta had reportedly been operating its mobile app without a privacy policy since at least 2010 and did not manage to post one after being ordered by the authorities to do so.  On a similar although slightly more alarming note, children’s mobile game company Mobbles is being accused by the Center for Digital Democracy of violating COPPA, which establishes strict parental consent rules affecting the collection of children’s data.  These are unlikely to be isolated incidents given that app operators tend to collect more data than is necessary to run the app.  In fact, these cases are almost certainly the start of a trend that will extend to Europe in 2013 and lead EU data protection authorities and mobile app developers to lock horns on how to achieve a decent degree of compliance in this environment.

Speaking of locking horns, next year (possibly quite early on) we will see the first instances of enforcement of the cookie consent requirement.  What is likely to be big about this is not so much the amount of the fines or the volume of enforcement actions, but the fact that we will see for real what the regulators’ compliance expectations actually are.  Will ‘implied consent’ become the norm or will websites suddenly rush to present their users with hard opt-in mechanisms before placing cookies on their devices?  Much would need to change for the latter to prevail but at the same time, the ‘wait and see’ attitude that has ruled to date will be over soon, as the bar will be set and the decision to comply or not will be based purely on risk – an unfortunate position to be in, caused by an ill-drafted law.  Let that be a lesson for the future.

The other big technological phenomenon that will impact on privacy and security practices – probably in a positive way – will be the cloud.  Much has been written on the data protection implications of cloud computing in the past months.  Regulators have given detailed advice.  Policy makers have made grand statements.  But the real action will be seen in 2013, when a number of leaders in the field start rolling out Binding Safe Processor Rules programmes and regulators are faced with the prospect of scrutinising global cloud vendors’ data protection offerings.  Let us hope that we can use this opportunity to listen to each other’s concerns, agree a commercially realistic set of standards and get the balance right.  That would be a massive achievement.

 

This article was first published in Data Protection Law & Policy in December 2012.

Geolocation in the spotlight

Posted on May 23rd, 2011



No avid reader of Article 29 Working Party opinions would be surprised to see statements such as “location data from smart mobile devices are personal data” or “the combination of the unique MAC address and the calculated location of a WiFi access point should be treated as personal data”. However, when those statements appear alongside references to the night table next to someone’s bed, or the fact that specific locations reveal data about someone’s sex life, one can’t help wondering whether an intended clarification of the legal framework applicable to geolocation services on smart mobile devices is getting a bit sensationalistic.

Let’s get the basic facts right first: every electronic exchange of information is recorded somewhere - emails sent, web pages visited, telephone calls made, credit card transactions, etc. It is in the nature of the digital age. Smartphones and the like represent the latest form of communications technology and, as such, mobile communications leave behind some of the most sophisticated records that digital technology can generate. So a full assessment of the rules affecting the use of smartphones should go beyond a textbook interpretation of European data protection law and look at whether the collection and use of this information has an impact on people’s privacy and data security.

Some of the information generated by our day to day use of mobile communication devices will no doubt be very private. For example, the concepts of “traffic data” and “location data” are carefully defined by EU law and their use is strictly regulated because it is perceived as sufficiently sensitive. Although there are some subtle differences, in both cases the lawful use of such data normally involves obtaining the consent of the individual. However, in the case of location data, consent is not required if the data is anonymous.
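
What “anonymous” means in practice is itself a judgment call. As a purely illustrative Kotlin sketch (not a legal standard), a geolocation service might drop identifiers and coarsen coordinates before using readings without consent – the function below rounds a fix to roughly a 1 km grid:

```kotlin
import kotlin.math.roundToInt

// Illustrative only: coarsen a location fix so it maps to an area
// rather than a household. On its own this does not guarantee legal
// anonymity; that depends on everything else collected alongside it.
data class CoarseLocation(val lat: Double, val lon: Double)

fun coarsen(lat: Double, lon: Double, decimals: Int = 2): CoarseLocation {
    // Two decimal places is roughly 1.1 km of latitude per grid cell.
    val factor = Math.pow(10.0, decimals.toDouble())
    return CoarseLocation(
        (lat * factor).roundToInt() / factor,
        (lon * factor).roundToInt() / factor
    )
}
```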

This is a crucial point in the context of smartphone-generated data which the Working Party does not fully appreciate in its recent opinion on geolocation services. This is unfortunate because, instead of acknowledging the different types of information that a smart mobile device may produce, the opinion dumps all data into the same bucket. The assumption seems to be that all data collected through a smartphone should be regarded as personal data, despite the fact that some of the data does not identify the device’s user and that the uses made of such data may never involve singling out an individual.

According to the Working Party, because location data from smart mobile devices reveals intimate details about the private life of their owner, the main applicable legitimate ground is prior informed consent. Again, this is a massive generalisation of the multiple modalities of geolocation services, many of which will rely on anonymous data or, at least, data which is not meant to identify or affect a particular user. Therefore, requiring consent from individuals may go further than what the EU legal framework intended.

For many human beings, life without a smart mobile device would be unimaginable. That is a slightly scary thought and regulators have a duty to scrutinise the data protection implications of new technologies that have the power to radically affect our lives. Clarifying how data protection law interacts with continuously evolving geolocation services is a laudable aim from which everyone can benefit. But unfortunately, a black and white approach to this issue conveys an unhealthy sense of panic and, even worse, distracts us from the fundamental challenge: spotting the real threats to our privacy and security that may be caused by rapid and imperfect technological development.

This article was first published in Data Protection Law & Policy in May 2011.

Let’s not panic about smartphones

Posted on May 18th, 2011



Today’s Metro headline “Android phones all leak secrets” (placed next to a photo of a gloomy-looking Arnie for added dramatic effect) was a fitting prelude to the publication of the latest Article 29 Working Party Opinion on geolocation services on smart mobile devices. The message of both pieces seemed to be very similar: enjoy your smartphone at your peril! Is it really that bad?

Let’s get the basic facts right first: every electronic exchange of information is recorded somewhere – emails sent, web pages visited, telephone calls made, credit card transactions, etc. It is in the nature of the digital age. Smartphones and the like represent the latest form of communications technology and, as such, mobile communications leave behind some of the most sophisticated records that digital technology can generate. The issue is whether the collection and use of this information has an impact on people’s privacy and data security.

The concepts of “traffic data” and “location data” are defined by EU law and their use is strictly regulated because it is perceived as sufficiently sensitive. Although there are some subtle differences, in both cases the lawful use of such data involves obtaining the consent of the individual. However, in the case of location data, consent is not required if the data is anonymous.

This is a crucial point in the context of smartphone-generated data that the Working Party Opinion does not fully address. According to the Working Party, because location data from smart mobile devices reveals intimate details about the private life of their owner, the main applicable legitimate ground is prior informed consent. This is a massive generalisation of the multiple modalities of geolocation services, many of which will rely on anonymous data or, at least, data which is not meant to identify or affect a particular user. Therefore, requiring consent may go further than what the EU legal framework intended.

Unfortunately, a black and white approach to this issue conveys an unhealthy sense of panic and, even worse, distracts us from the real challenge: spotting the real threats to our privacy and security that may be caused by rapid and imperfect technological development.