Archive for the ‘Social networking’ Category

German Federal Court: “send-to-a-friend” emails are SPAM

Posted on November 7th, 2013 by



In a decision of 12 September 2013 (court ref. I ZR 208/12), the German Federal Court of Justice ruled that e-mails sent via the “send-to-a-friend” functionality of a website must be considered illegal spam unless the recipient has expressly consented to receiving them. According to the court, the responsibility to obtain consent rests with the website service provider, not the user. The court further held that it is irrelevant that the sending is initiated by a user, since the indirect promotional nature of “send-to-a-friend” e-mails brings them within the scope of German direct marketing regulation under Sec. 7 of the German Unfair Competition Act.

“Send-to-a-friend” functionality allows users to send an e-mail from a website to a third-party recipient with a link to content on that website. In this particular case, the e-mail was sent through the mail server of the website provider and in the name of the website provider. As a consequence, the Federal Court ruled that the “send-to-a-friend” functionality must be considered illegal under German law.

The court emphasised the illegality of setups where the website provider appears as the sender of the recommendation e-mail, as it is virtually impossible for the provider to meet the requirements for express consent under Sec. 7 of the Unfair Competition Act. However, chances are that its reasoning would not have been different had the user been identified as the sender, since the court predominantly focussed on the promotional intention of the website provider in its ruling. Further, if the user had appeared as the sender, this could give rise to other claims under unfair competition law on the basis of concealing the identity of the advertiser in a promotional e-mail.

Under German unfair competition law, the sending of commercial e-mails is subject to a strict and express consent requirement, usually implemented through the so-called “double opt-in” mechanism: the advertiser must not only obtain consent at the time of collecting the e-mail address, but must also ensure that the user who provided the address is the owner of the account, by sending a confirmation e-mail with a link the user must click to confirm his or her consent. In practice, these requirements will be very challenging to meet for “send-to-a-friend” functionality.
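The double opt-in mechanism described above can be sketched in a few lines. This is an illustrative model only, not any particular mailing system's API; the function names and the in-memory stores are hypothetical.

```python
import secrets

# Hypothetical in-memory stores: tokens awaiting confirmation,
# and addresses whose owners have confirmed consent.
pending = {}
confirmed = set()

def request_signup(email):
    """Step 1: record the address as unconfirmed and return a token
    that would be embedded in the confirmation e-mail's link."""
    token = secrets.token_urlsafe(16)
    pending[token] = email
    return token

def confirm(token):
    """Step 2: the user clicks the link; only now is consent established,
    proving the address owner (not just anyone typing the address) agreed."""
    email = pending.pop(token, None)
    if email is None:
        return False  # unknown or already-used token: no valid consent
    confirmed.add(email)
    return True

def may_send_marketing(email):
    """Commercial e-mail is only permissible to confirmed addresses."""
    return email in confirmed
```

The point of the second step is exactly the one the court's reasoning turns on: the recipient, not a third party, must act to establish consent, which is why a user submitting a friend's address can never complete the flow on the friend's behalf.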

Recommendations for marketers

Nevertheless, “send-to-a-friend” marketing remains a popular and powerful tool for advertisers, and this latest ruling is unlikely to diminish its popularity in the short term.  Website providers who wish to continue using “send-to-a-friend” marketing in Germany can mitigate risk by:

1.  Clearly disclosing to the user that he or she should only use the feature when there is sufficient reason to assume that the recipient consents to receiving the recommendation e-mail.

2.  Identifying the user as the sender of the e-mail, not the website.

3.  Not sending “send-to-a-friend” e-mails to individuals who have previously opted out of receiving marketing communications from the provider.  An opt-out link should also be included in every “send-to-a-friend” e-mail.

4.  Capping the number of messages a user is allowed to send, and not incentivising sending by, for example, offering additional competition entries for each e-mail sent (currently common in many prize draw mechanics).
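The suppression and capping measures in recommendations 3 and 4 could be enforced by a simple gate in front of the send feature. The sketch below is a hypothetical illustration; the stores and the cap value are assumptions, not legally prescribed figures.

```python
# Hypothetical guard for a "send-to-a-friend" feature implementing
# recommendations 3 and 4 above.
MAX_SENDS_PER_USER = 5   # illustrative cap, not a legally defined limit

opted_out = set()        # recipients who opted out of marketing e-mail
sends_by_user = {}       # user id -> number of recommendations sent

def may_recommend(user_id, recipient_email):
    """Return True (and count the send) only if the recipient has not
    opted out and the user has not exhausted the per-user cap."""
    if recipient_email in opted_out:
        return False  # recommendation 3: respect prior opt-outs
    if sends_by_user.get(user_id, 0) >= MAX_SENDS_PER_USER:
        return False  # recommendation 4: cap per-user volume
    sends_by_user[user_id] = sends_by_user.get(user_id, 0) + 1
    return True
```

Note that the opt-out check runs first, so a suppressed recipient never consumes any of the user's allowance.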

However, while taking the above measures will limit enforcement risk on a practical level, from a purely legal point of view it seems that exposure can only be fully avoided by removing “send-to-a-friend” features from websites. Whether or not this spells the end of “send-to-a-friend” functionality in Germany in the longer term will depend on the level and significance of any enforcement activity by individuals, competitors and/or consumer protection associations following the court’s ruling.

The Internet and the Great Data Deletion Debate

Posted on August 15th, 2013 by



Can your data, once uploaded publicly onto the Web, ever realistically be forgotten?  This was the debate I was having with a friend from the IAPP last night.  Much has been said about the EU’s proposals for a ‘right to be forgotten’ but, rather than arguing points of law, we were simply debating whether it is even possible to purge all copies of an individual’s data from the Web.

The answer, I think, is both yes and no: yes, it’s technically possible, and no, it’s very unlikely ever to happen.  Here’s why:

1. To purge all copies of an individual’s data from the Web, you’d need either (a) to know where all copies of those data exist on the Web, or (b) to give the data some kind of built-in ‘self-destruct’ mechanism so that it purges itself after a set period of time.

2.  Solution (a) creates as many privacy issues as it solves.  You’d need either to create some kind of massive database tracking where all copies of data go on the Web, or each copy of the data would need, somehow, to be ‘linked’ directly or indirectly to all other copies.  Even assuming this was technically feasible, it would have a chilling effect on freedom of speech – consider how likely a whistleblower would be to post content knowing that every copy of that content could be traced back to its original source.  In fact, how would anyone feel about posting content to the Internet knowing that every single subsequent copy could easily be traced back to their original post and, ultimately, back to them?

3.  That leaves solution (b).  It is wholly possible to create files with built-in self-destruct mechanisms, but they would no longer be pure ‘data’ files.  Instead, they would be executable files – i.e. files that can be run as software on the systems on which they’re hosted.  But allowing executable data files to be imported and run on Web-connected IT systems creates huge security exposure – the potential for exploitation by viruses and malicious software would be enormous.  The other possibility would be for the data file to contain a separate data field instructing the system on which it is hosted when to delete it – much like a cookie has an expiry date.  That would be fine for proprietary data formats on closed IT systems, but is unlikely to catch on across existing, well-established and standardised data formats like .jpg, .mpg etc. across the global Web.  So the prospects for solution (b) catching on also appear slim.
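The expiry-field variant of solution (b) can be sketched very simply. The key design point is that the data stays inert: the *hosting system*, not the file, enforces deletion. The field names and the sidecar-metadata convention below are assumptions for illustration, not any existing format.

```python
import time

def wrap(payload, ttl_seconds):
    """Attach an expiry timestamp to otherwise inert data,
    much like a cookie's expiry attribute."""
    return {"payload": payload, "expires_at": time.time() + ttl_seconds}

def sweep(store):
    """The hosting system periodically enforces deletion by dropping
    every record whose expiry has passed; the data itself never runs."""
    now = time.time()
    return {key: rec for key, rec in store.items()
            if rec["expires_at"] > now}
```

The catch, as noted above, is adoption: every system that ever receives a copy would have to honour the `expires_at` convention, which is exactly what established formats like .jpg give you no way to require.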

What are the consequences of this?  If we can’t purge the copies of individuals’ data spread across the Internet, where does that leave us?  Likely the only realistic solution is to control the propagation of the data at source in the first place.  Achieving that requires a combination of:

(a)  Awareness and education – informing individuals through privacy statements and contextual notices how their data may be shared, and educating them not to upload content they (or others) wouldn’t want to share;

(b)  Product design – utilising privacy impact assessments and privacy-by-design methodologies to assess product/service intrusiveness at the outset, and then designing systems that don’t allow illegitimate data propagation; and

(c)  Regulation and sanctions – we need proportionate regulation backed by appropriate sanctions to incentivise realistic protections and discourage illegitimate data trading.  

No one doubts that privacy on the Internet is a challenge, and nowhere does it become more challenging than with the speedy and uncontrolled copying of data.   But let’s not focus on how we stop data once it’s ‘out there’ – however hard we try, that’s likely to remain an unrealistic goal.  Let’s focus instead on source-based controls – this is achievable and, ultimately, will best protect individuals and their data.

A Brave New World Demands Brave New Thinking

Posted on June 3rd, 2013 by



Much has been said in the past few weeks and months about Google Glass, Google’s latest innovation that will see it shortly launch Internet-connected glasses with a small computer display in the corner of one lens that is visible to, and voice-controlled by, the wearer. The proposed launch capabilities of the device itself are—in pure computing terms—actually relatively modest: the ability to search the web, bring up maps, take photographs and video and share to social media.

So far, so iPhone.

But, because users wear and interact with Google Glass wherever they go, they will have a depth of relationship with their device that far exceeds any previous relationship between man and computer. Then throw in the likely short- to mid-term evolution of the device—augmented reality, facial recognition—and it becomes easy to see why Google Glass is so widely heralded as The Next Big Thing.

Of course, with an always-on, always-worn and always-connected, photo-snapping, video-recording, social media-sharing device, the privacy issues are a-plenty, ranging from the potential for crowd-sourced law enforcement surveillance to the more mundane forgetting-to-remove-Google-Glass-when-visiting-the-men’s-room scenario. These concerns have seen a very heated debate play out across the press, on TV and, of course, on blogs and social media.

But to focus the privacy debate just on Google Glass really misses the point. Google Glass is the headline-grabber, but in reality it’s just the tip of the iceberg when it comes to the wearable computing products that will increasingly be hitting the market over the coming years. Pens, watches, glasses (Baidu is launching its own smart glasses too), shoes, whatever else you care to think of—will soon all be Internet-connected. And it doesn’t stop at wearable computing either; think about Internet-connected home appliances: We can already get Internet-connected TVs, game consoles, radios, alarm clocks, energy meters, coffee machines, home safety cameras, baby alarms and cars. Follow this trend and, pretty soon, every home appliance and personal accessory will be Internet-connected.

All of these connected devices—this “Internet of Things”—collect an enormous volume of information about us, and in general, as consumers we want them: They simplify, organize and enhance our lives. But, as a privacy community, our instinct is to recoil at the idea of a growing pool of networked devices that collect more and more information about us, even if their purpose is ultimately to provide services we want.

The consequence of this tends to be a knee-jerk insistence on ever-strengthened consent requirements and standards: Surely the only way we can justify such a vast collection of personal information, used to build incredibly intricate profiles of our interests, relationships and behaviors, is to predicate collection on our explicit consent. That has to be right, doesn’t it?

The short answer to this is “no”—though not, as you might think, for the traditionally given reasons that users don’t like consent pop-ups or that difficulties arise when users refuse, condition or withdraw their consents. 

Instead, it’s simply that explicit consent is lazy. Sure, in some circumstances it may be warranted, but to look to explicit consent as some kind of data collection panacea will drive poor compliance that delivers little real protection for individuals.

Why? 

Because when you build compliance around explicit consent notices, it’s inevitable that those notices will become longer, all-inclusive, heavily caveated and designed to guard against risk. Consent notices become seen as a legal issue, not a design issue, inhibiting the adoption of Privacy by Design development so that, rather than enhancing user transparency, they have the opposite effect. Instead, designers build products with little thought to privacy, safe in the knowledge that they can simply ‘bolt on’ a detailed consent notice as a ‘take it or leave it’ proposition on installation or first use, just like terms of service are now. And, as technology becomes ever more complicated, so it becomes ever more likely that consumers won’t really understand what it is they’re consenting to anyway, no matter how well it’s explained. It’s also a safe bet that users will simply ignore any notice that stands between them and the service they want to receive. If you don’t believe me, then look at cookie consent as a case in point.

Instead, it’s incumbent upon us as privacy professionals to think up a better solution. One that strikes a balance between the legitimate expectations of the individual with regard to his or her privacy and the legitimate interests of the business with regard to its need to collect and use data. One that enables the business to deliver innovative new products and services to consumers in a way that demonstrates respect for their data and engenders their trust and which does not result in lazy, consent-driven compliance. One that encourages controllers to build privacy functionality into their products from the very outset, not address it as an afterthought.

Maybe what we need is a concept of an online “personal space.”

In the physical world, whether through the rules of social etiquette, an individual’s body language or some other indicator, we implicitly understand that there is an invisible boundary we must respect when standing in close physical proximity to another person. A similar concept could be conceived for the online world—ironically, Big Data profiles could help here. Or maybe it’s as simple as promoting a concept of “surprise minimization” as proposed by the California attorney general in her guidance on mobile privacy—the concept that, through Privacy by Design methodologies, you avoid surprising individuals by collecting data from or about them that, in the given context, they would not expect or want.

Whatever the solution is, we’re entering a brave new world; it demands some brave new thinking.

This post was first published on the IAPP Privacy Perspectives here.

Positive ruling for US businesses adopting single EU controller model?

Posted on February 19th, 2013 by



In two preliminary decisions, the Administrative Court of the German federal state of Schleswig-Holstein ruled last week that two administrative acts which had been issued by the DPA of Schleswig-Holstein (ULD) against Facebook Inc. and Facebook Ireland Ltd. cannot be enforced until a decision in the main proceedings is made (ref. nos. 8 B 60/12 and 8 B 61/12). What at first sight seems to be only a side aspect of the ULD’s battle against the handling of personal data by the world’s largest social network has some fundamental implications, as the court denied the applicability of German data protection law to the company’s German activities altogether.

In its preliminary decisions, the court followed Facebook’s argument that only Facebook Ireland Ltd. is relevant for the determination of applicable law, as the German entity solely provides supporting services (marketing and acquisition) and is not involved in the processing of personal data. Facebook Ireland Ltd. is the only European entity with direct control over the user data of non-US users; other European entities are not involved in the processing of personal data. The court regarded it as irrelevant whether Facebook Inc. (USA) is the sole controller of personal data or a joint controller together with Facebook Ireland Ltd., as the Irish entity must be regarded as an establishment of Facebook Inc. which processes personal data in the course of its business operations. The court stated that Facebook Ireland Ltd., with its 400 employees and its infrastructure in Dublin, “implies the effective and real exercise of activity through stable arrangements” within the meaning of recital 19 of the Directive, and thus fulfils the requirements for an “establishment” under Art. 4(1)(a) of Directive 95/46/EC.

Further, the court stated that it is not relevant where the servers on which the data is stored and processed are located, as Art. 4(1)(a) of Directive 95/46/EC only requires that the processing is carried out “in the context of the activities of an establishment of the controller”. Facebook Ireland Ltd. must therefore be regarded as an establishment within the meaning of Art. 4(1)(a) even if the technical infrastructure is located in the US.

The background of the case is that the ULD had issued two identical administrative orders against Facebook Inc. and Facebook Ireland Ltd. in December 2012 to force the company to unlock pseudonymous user accounts that had been locked by Facebook. The ULD regards Facebook’s policy that users must use their full and correct names for their profiles as a violation of German data protection regulation and the German Telemedia Act, which stipulate that anonymous or pseudonymous use of internet services must be offered where possible. The ULD also made the orders immediately enforceable, and only this additional element of the orders was subject to the preliminary ruling of the court.

It must thus be borne in mind that the decision is only preliminary and based on a weighing of interests rather than a thorough legal assessment. The main criterion for the court was whether the DPA’s interest in immediate enforcement outweighs Facebook’s interest in the suspension of enforcement. The legal assessment, although part of that consideration, is not binding and will be further scrutinised in the main proceedings. The DPA of Schleswig-Holstein has also lodged a complaint against the decision.

Conclusions: In general, the decisions of the administrative court support the validity of a structure that various US internet businesses use in Europe to mitigate potential exposure to multiple EU data protection regimes, i.e. appointing a single European subsidiary to assume controllership of European users’ personal data, while other European subsidiaries provide supporting services in the areas of marketing and distribution. However, the decision also shows that the setup of a European structure must be carefully shaped as the court put specific emphasis on the “stable arrangements” and the personnel and infrastructural configuration of the establishment. This makes clear that “letterbox offices” will not be accepted, and that only a legal setup that reflects the reality of the business may qualify as an establishment under the Directive.

As a further important point to note, the court also held that EU data protection law does not require the IT infrastructure to be located on European soil. In this regard, it must be noted that Directive 95/46/EC potentially allows for an opposing interpretation; and it should be closely monitored whether the position of the Administrative Court of Schleswig-Holstein finds support in potential appellate proceedings.

Stronger EU data protection rules in the pipeline

Posted on November 8th, 2011 by



Here is the latest announcement from the European Commission concerning the reform of the data protection directive, following a meeting yesterday between the EU Justice Commissioner Viviane Reding and Germany’s Federal Minister for Consumer Protection Ilse Aigner:

http://europa.eu/rapid/pressReleasesAction.do?reference=MEMO/11/762&type=HTML

In a nutshell:

* The proposal will be published by the end of January 2012.
* Consumers in Europe should see their data strongly protected.
* Companies who direct their services to European consumers will be subject to EU data protection laws.
* Social networks will be caught by EU law, even where based in a third country and where data is stored in the cloud.
* Consumers must be more empowered than they are today, particularly by giving their explicit consent before their data is used and by having the right to delete their data at any time.

These are obviously very broad brush political statements but they suggest that a tougher regime is in the pipeline.

Happy rentrée

Posted on September 1st, 2011 by



With the summer holiday season coming to an end, it is time for the annual rentrée – back to school, back to work and back to our never-ending roster of tricky yet stimulating privacy-related challenges.

And what an exciting rentrée this one is for the privacy and data protection world.  For those who have just finished putting away their swimming costumes and beach towels, here is a very quick update on what is happening right now:

* The European Commission is putting the finishing touches to the legislative reform proposals that will eventually replace the 1995 data protection directive.  Expect big changes on applicable law, mechanisms to put people in control of their data, an emphasis on transparency, a fully blown ‘accountability package’ and innovations on adequacy for international data transfers.
* France has now implemented the cookie consent rule, which seems to allow implied consent via browser settings.  For an at-a-glance look at where other EU jurisdictions are in this process and their likely stance across Europe, have a look at our cookie consent tracking table.  Also, stay tuned to the IAPP website for a forthcoming webinar on this issue.
* In the meantime, the Article 29 Working Party has given its verdict on the proposed self-regulatory framework for online behavioural advertising and said that the framework does not meet the consent requirements.  However, there should still be room for a fully compliant approach that does not necessarily involve bombarding Internet users with pop-up windows and tick boxes.
* The Article 29 Working Party is also working on further streamlining the BCR approval process, in anticipation of its likely explicit recognition in the forthcoming data protection legal framework.  For a full update on what is going on, make sure you attend the BCR Masterclass on 27 September.
* The DPA for the German state of Schleswig-Holstein has ordered website owners in that state to remove social plug-ins such as the ‘like’ button from their sites by the end of this month or face enforcement action.  Such a draconian action seems completely out of sync with what is happening in the real world and the growing uptake by businesses and organisations of social and professional networking tools.  What can possibly happen next?

Lots to come to terms with in the coming weeks…  Plus looking forward to catching up in person with those attending the IAPP Academy in Dallas.