In the wake of Edward Snowden’s intelligence leaks and increasing concerns about the use of personal information, the Center for Digital Democracy (CDD) recently filed a Federal Trade Commission complaint alleging that 30 U.S. data brokers and data management firms had violated the European Union’s Privacy Directive Safe Harbor framework.  According to the CDD, the companies’ collection of private data of EU residents, including online tracking, purchasing history, addresses, income and family structures, violates the EU Safe Harbor commitments they made as required by the EU Privacy Directive. 

What is the Safe Harbor Framework and Why is it Useful?

The EU Privacy Directive establishes the protection of one’s personal data as a fundamental human right and prohibits the transmission of such data outside of the EU unless the covered entity or individual can certify that “adequate safeguards” are in place. This, of course, raises issues when EU-protected personal data needs to be sent cross-border to U.S. businesses, because the EU does not view the U.S. as having adequate safeguards. 

Exceptions are made where U.S. companies use EU-approved standard contractual clauses (SCCs), which embody key EU privacy principles. In the case of transfers of personal data across EU borders within a multinational corporation, the EU has issued approved binding corporate rules (BCRs).

Yet, the biggest exception to the directive’s prohibitions on transmission of personal data is the EU’s “safe harbor.”  Under that safe harbor, data can be transmitted to third-party nations where “the third country in question ensures an adequate level of protection and the [EU] laws implementing other provisions of the Directive are respected prior to the transfer.”  Companies seeking protection of the safe harbor certify their compliance with the Directive’s seven privacy principles and subject themselves to enforcement by the Federal Trade Commission in the event of non-compliance. More than 3,000 U.S. businesses have enrolled in the Safe Harbor program, and it underlies millions of data transfers from the EU. 

U.S. Criticized for Lax Enforcement of Safe Harbor

The EU Data Protection Authority and the CDD have each recently criticized the FTC for its lax enforcement of what the EU deems to be privacy violations. And the CDD’s complaint alleges more than just misuse of personal data by the 30 companies it targeted.  As CDD’s Legal Director Hudson Kingston has explained, “CDD’s complaint describes the systemic failure of the Safe Harbor to function as it was intended. Companies are flouting standards that the Department of Commerce agreed to and the Federal Trade Commission pledged to enforce . . . The fundamental privacy right of 500 million Europeans has been ignored and must be acknowledged and protected going forward.”

Jeff Chester, CDD’s executive director, further elaborated in a statement:  “Instead of ensuring that the U.S. lives up to its commitment to protect EU consumers, our investigation found that there is little oversight and enforcement by the FTC. The Big Data-driven companies in our complaint use Safe Harbor as a shield to further the information-gathering practices without serious scrutiny . . . Our investigation found that many of the companies are involved with a web of powerful multiple data broker partners who, unknown to the EU public, pool their data on individuals so they can be profiled and targeted online.”

FTC Steps Up Safe Harbor Enforcement

In an apparent response to some of these criticisms, the FTC began to enforce safe harbor violations more actively in 2014. In January of this year, the FTC announced it had settled alleged privacy violations with 12 companies.  Then, in June 2014, the FTC announced that it had settled alleged privacy violations under the safe harbor with 14 U.S. companies.  We expect enforcement to continue increasing in light of actions like the CDD complaint.

The National Labor Relations Board has found that another employer (a non-union employer) violated its employees’ protected concerted activity rights under the National Labor Relations Act (NLRA) when it disciplined and fired them for certain social media activity. Our Labor Group provides an extensive analysis of this decision in Triple Play Sports Bar and Grille, 361 NLRB No. 31 (2014).

The analysis of the issues in Triple Play, you will see, is quite fact-intensive and requires some thought in applying the applicable legal principles – and that is just the NLRA issues. When companies are faced with adverse social media activity or campaigns, whether by employees, customers, bloggers or others, they frequently are unprepared to take the appropriate steps to investigate, or to weigh the legal, business and other risks in deciding what actions, if any, to take. The situation in Triple Play, and other activity in social media, provides good reason for companies to be better prepared and to have a plan. Many companies may already have a crisis management plan or a communications policy, but those plans and policies need to reflect the nuances of social media and other factors, such as who is engaging in the activity and what information is being communicated.

Here are some basic questions/issues that should be considered in any plan, which are by no means exhaustive:

  • Should we have resources proactively monitoring social media activity and communications that potentially affect the company, and what limitations should there be on that monitoring?
  • Who in the company should receive initial reports of a potential problem?
  • Who should be involved in the investigation? Do we need third-party forensic expertise?
  • Do we have insurance coverage for the particular incident?
  • How will the persons involved in the activity – employees, customers, bloggers, etc. – affect the process from a legal, business or other perspective?
  • How did we learn about, and get access to, the activity – was it permissible under the Stored Communications Act (SCA), the Electronic Communications Privacy Act (ECPA), or state laws concerning social media passwords?
  • Is the information being communicated accurate?
  • Are we acting consistent with our own privacy and other policies in connection with the investigation?
  • Is the activity/communication protected – protections may exist under the First Amendment, the NLRA, whistleblower statutes, or other sources?
  • Do we need to respond? How have we responded in the past to similar situations? Will a response only make things worse? If a response is warranted, what should it be?
  • What can we learn from this incident in order to avoid incidents like this in the future?

A little planning can go a long way toward minimizing mistakes and getting better results when companies face urgent situations that require immediate action.

With the proliferation of wage and hour litigation, especially in Florida, which has the highest number of Fair Labor Standards Act (“FLSA”) cases filed annually nationwide, employers have sought better ways to track employee work time in anticipation of defending against unpaid overtime claims. Additionally, employers have used monitoring devices in hopes of increasing efficiency, addressing safety concerns, ensuring compliance with company policies, protecting employer-owned property, and improving customer service.  One such monitoring method is the installation of global positioning system (“GPS”) devices on equipment such as vehicles, cellular phones, laptops, and iPads.

Few courts have addressed the issue of GPS tracking in the employment context, although most have held that employers may use tracking devices on company-owned equipment where the employee does not have a reasonable expectation of privacy in its use. Several states, including California, Minnesota, Tennessee, and Texas, have laws preventing the use of mobile tracking devices to track other individuals.  Common exceptions to these laws include the consent of the owner of the device or vehicle to which a tracking device is attached.

In addition to notice and consent, employers should consider whether employees have a reasonable expectation of privacy when using the equipment on which the GPS device is to be installed.  A balance must be struck among the employee’s expectation of privacy, the reasonableness of the intrusion upon that privacy (i.e., being tracked by the employer), and the employer’s legitimate business purpose for utilizing the tracking device. These considerations are heightened when the device is attached to an employee’s personal property, or to company-owned equipment that the employee uses or transports after work hours, and the tracking system continues to record such after-hours usage.

Tracking employees during non-work hours can be an invasion of the employee’s privacy, whether the tracking is done via employer-owned or employee-owned equipment. When the device tracks non-work time, such as during the evenings, weekends, and when the employee is on vacation, the employer may gain private information about an employee that would be considered an invasion into the employee’s personal privacy.  For example, an employer may find out that an employee travels each day after work to a dialysis center; that the employee has a pattern of visiting gambling facilities; the employee’s travel habits; where and how often the employee shops; the number of restroom breaks an employee takes during the day; the employee’s eating habits; the employee’s religious service attendance patterns or schedule; etc.  Not only does obtaining and acting upon such information potentially lead to employee claims of an unreasonable invasion of privacy, but it could also lead to claims of discrimination or wrongful termination based upon off-duty conduct (where such claims are permitted under state law, such as in New York).

Thus, information collected through GPS monitoring should focus on an employee’s job performance and be disseminated only to employees who have a legitimate business reason for knowing the information. The tracking should be limited to legitimate business purposes, conducted only during working hours, and carried out only after the company has addressed the employee’s expectation of privacy. Policies should be carefully drafted to explain the legitimate business purpose, the circumstances under which monitoring will take place, the company’s right to monitor employee actions while using company-owned property, the GPS monitoring capabilities of company-issued property, and that employees should not have an expectation of privacy while using the same.  For employee-owned equipment, employers should have a carefully drafted Bring Your Own Device policy that obtains the employee’s consent to use of the tracking device on the employee’s equipment and carefully limits tracking to times when the employee is working.

As previously reported, in a March 2014 case captioned H.W. v. Sterling High School District, a New Jersey high school student filed suit claiming school officials had violated her constitutional rights when they punished her for content she posted on Twitter criticizing Sterling High School’s principal.

The settlement, which was approved by the Sterling High School District in April and entered by the Court on July 29, 2014, provides that the district will reimburse the student $9,000 for her legal fees.   However, the district will not pay additional damages to the student.  In addition, the school district agreed to revoke punishments imposed against the student for her Twitter postings, expunge documents related to the incident from the student’s academic record, and abandon its attempt to require drug testing of the student.  Specifically, the agreement provides that the student is eligible for graduation upon completion of outstanding assignments, is allowed to attend the senior class trip to Florida, and, if she does not seek press coverage or disclose the settlement terms, will be allowed to participate in prom and the graduation ceremony.

Beyond agreements directly between the school district and the student, the settlement also calls for the school to modify its student handbook to specify that administrators “may be monitoring student discussions on Facebook, Twitter or other social media outlets and may seek to impose penalties in accordance with the student code of conduct if such discussions cause a substantial disruption at the school.”

On August 5, 2014, Missouri voters approved Amendment 9 to the Missouri Constitution, making Missouri the first state in the nation to offer explicit constitutional protection to electronic communications and data from unreasonable searches and seizures.

The official ballot title asked:  “Shall the Missouri Constitution be amended so that the people shall be secure in their electronic communications and data from unreasonable searches and seizures as they are now likewise secure in their persons, homes, papers and effects?”

The fair ballot language specified:  “A ‘yes’ vote will amend the Missouri Constitution to specify that electronic data and communications have the same protections from unreasonable searches and seizures as persons, papers, homes, and effects.  A ‘no’ vote will not amend the Missouri Constitution regarding protections for electronic communications and data.”

The measure, which was approved by nearly 75% of voters, amended Section 15 of Article I of the Missouri Constitution to read:

That the people shall be secure in their persons, papers, homes, effects, and electronic communications and data, from unreasonable searches and seizures; and no warrant to search any place, or seize any person or thing, or access electronic data or communication, shall issue without describing the place to be searched, or the person or thing to be seized, or the data or communications to be accessed, as nearly as may be; nor without probable cause, supported by written oath or affirmation.

Missouri’s vote comes on the heels of the June 2014 U.S. Supreme Court’s ruling, as covered by CNN, that law enforcement must obtain a warrant to search cell phones seized during arrest.

Given the Court’s ruling, and this first-in-the-nation measure by Missouri, it is anticipated that other states will extend similar constitutional protections to electronic communications and data.  Importantly, entities that operate as government contractors, and entities that may be considered state actors due to their funding, should be aware of these developments to determine what potential impact, if any, exists for their business.

In what is believed to be the largest security breach to date, the Associated Press reported that Russian hackers have stolen 1.2 billion user names and passwords. According to the AP, Milwaukee security firm Hold Security learned of the breach but has yet to provide details about the series of website hackings believed to have affected 420,000 websites. Citing nondisclosure agreements, Hold Security has not named the hacked websites.

A concern raised by some is the “breach fatigue” that may be created by the continuing stream of news reports about breaches large and small, the notification letters that follow, and the repeated warnings and recommendations to individuals and businesses about addressing data security. This “condition” may be real, but it is a condition individuals and businesses have to overcome as “big data” and the “internet of things” (IoT) become more a part of our lives, creating value in data that criminals want to steal.

A frequent refrain from some, including many small businesses, is that incidents like these will not happen to them. But, as the L.A. Times reports, according to the National Small Business Association, 44% of survey respondents had been victims of at least one cyberattack. For well over a decade, identity theft has continued to be the top crime reported to the FTC. For businesses, the risk is more than whether a breach will happen and how to respond; it includes the effects a breach can have on the business’s reputation, the enforcement that increasingly follows these incidents at the federal and state level, and increased litigation, including class actions. Late last month, for instance, the Massachusetts Attorney General’s office reported a $150,000 settlement with a local hospital based on allegations of failing to properly safeguard patient data and report the incident.

For many businesses, there are a number of “best practices” that are relatively easy to implement and can have a significant impact on reducing the risks of a data breach. Many ask: yes, but where do we start? Logically, the starting point is gaining an understanding of the business’s data privacy and security risks – doing a risk and vulnerability assessment. There are a number of resources available to assist in designing and carrying out an assessment. For example, the National Institute of Standards and Technology (NIST) recently issued a draft update of its primary guide to assessing security and privacy controls. While the work NIST does, including this guide, is designed for federal information systems and networks, it is an excellent and comprehensive source for businesses to understand steps they too can take to safeguard their systems and data.

The practical starting point, however, is getting management, C-suite support. Data privacy and security is an enterprise-wide risk which requires an enterprise-wide solution. Like many conditions, left untreated, “breach fatigue” can have significant consequences.

The New York Department of Financial Services recently published proposed regulations which would require virtual currency businesses operating in New York State to safeguard data and protect customer privacy.

Notably, the proposed regulations include requirements for virtual currency businesses to maintain cyber security programs and business continuity and disaster recovery plans.

Virtual currencies under the regulations include decentralized digital currencies (such as Bitcoin), as well as centrally issued or administered digital currencies and those that can be created by computerized or manufacturing effort (e.g. Bitcoin mining). Virtual currencies would not include digital units used in online gaming platforms that are of no value outside the gaming environment, nor would they include affinity and rewards program points that cannot be converted or redeemed for government issued currency.

Cyber security programs, very similar to the written information security programs we have previously discussed, would be required to be in writing and must be designed to ensure the availability and functionality of the business’s electronic systems and to protect those systems, and any sensitive data stored on them, from unauthorized access, use, or tampering. The cyber security program must perform five core cyber security functions:

  1. identify internal and external cyber risks;
  2. protect the business’s electronic systems, and the information stored on those systems, from unauthorized access, use, or other malicious acts;
  3. detect systems intrusions, data breaches, unauthorized access to systems or information, malware, and other Cyber Security Events;
  4. respond to detected Cyber Security Events to mitigate any negative effects; and
  5. recover from Cyber Security Events and restore normal operations and services.

Similarly, the cyber security policy must address the following areas:

  1. information security;
  2. data governance and classification;
  3. access controls;
  4. business continuity and disaster recovery planning and resources;
  5. capacity and performance planning;
  6. systems operations and availability concerns;
  7. systems and network security;
  8. systems and application development and quality assurance;
  9. physical security and environmental controls;
  10. customer data privacy;
  11. vendor and third-party service provider management;
  12. monitoring and implementing changes to core protocols not directly controlled by the business, as applicable; and
  13. incident response.

Some other key provisions of the cyber security program include the identification of a Chief Information Security Officer (“CISO”) — who is responsible for overseeing and implementing the cyber security program and enforcing its cyber security policy — as well as audit functions, which include annual penetration testing of the business’s electronic systems and audit trail systems to track and maintain data.

A 45-day public comment period began upon the publication of the proposed regulations.

As reported by HealthcareInfoSecurity.com, a former hospital employee is facing criminal charges brought by federal prosecutors in Texas for alleged violations of the privacy and security requirements under the Health Insurance Portability and Accountability Act (HIPAA). You may remember that back on June 1, 2005, the Department of Justice issued an opinion supporting the prosecution of individuals under HIPAA’s criminal enforcement provisions.  42 U.S.C. § 1320d-6(b). In 2010, we reported on a doctor in California who was sentenced to four months in prison for snooping into medical records. So, while prosecutions for privacy violations under HIPAA are not common, under certain circumstances individuals can be criminally prosecuted for violating HIPAA.

When is a violation of HIPAA criminal?

In short, a person who knowingly and in violation of the HIPAA rules does one or more of the following puts himself or herself in jeopardy of criminal prosecution under HIPAA:

  • uses or causes to be used a unique health identifier;
  • obtains individually identifiable health information relating to an individual; or
  • discloses individually identifiable health information to another person.

If convicted, the level of punishment depends on the seriousness of the offense:

  • a fine of up to $50,000 and/or imprisonment for up to one year for a simple violation;
  • a fine of up to $100,000 and/or imprisonment for up to five years if the offense is committed under false pretenses; and
  • a fine of up to $250,000 and/or imprisonment for up to ten years for offenses committed with intent to sell, transfer, or use individually identifiable health information for commercial advantage, personal gain, or malicious harm.

Texas Prosecution

According to the DOJ, the former East Texas hospital employee has been indicted for criminal violations of HIPAA. The individual is being charged with wrongful disclosure of individually identifiable health information. The DOJ alleges that from December 1, 2012, through January 14, 2013, while an employee of the hospital (a HIPAA covered entity), the individual obtained protected health information with the intent to use the information for personal gain. If convicted, the individual faces up to ten years in prison.

Although not common, criminal prosecutions like this one can be an important reminder to workforce members of HIPAA covered entities that violating the HIPAA rules can result in more than the loss of their jobs. Some covered entities inform their employees of the potential for criminal sanctions as part of their new hire and annual trainings.

In response to reported ongoing confusion regarding how to satisfy the “verifiable parental consent” requirements in the Children’s Online Privacy Protection Act (“COPPA”), 15 U.S.C. § 6501 et seq. (1998), and its implementing regulations, 16 CFR Part 312 (2000), the Federal Trade Commission (“FTC”) revised its guidance on enforcement of the same. According to the FTC, “The primary goal of COPPA is to place parents in control over what information is collected from their young children online. The Rule was designed to protect children under age 13 while accounting for the dynamic nature of the Internet.” The FTC provides interpretive guidance on COPPA and the regulations promulgated under it via Frequently Asked Questions (“FAQs”) on its business center website. The FTC revised these FAQs on July 16, 2014.

The revised FAQs generally affirm the FTC’s longstanding position that its list of acceptable methods to obtain verifiable parental consent is not exhaustive. Instead, web-based and mobile application designers are free to use creative methods of verifying parental consent if such consent can be shown to be a “reasonable effort (taking into consideration available technology) . . . to ensure that a parent of a child receives notice of the operator’s personal information collection, use, and disclosure practices, and authorizes the collection, use, and disclosure, as applicable, of personal information and the subsequent use of that information before that information is collected from that child.” 15 U.S.C. § 6501(9).

So, what’s different under the new guidance?

When Parental Credit Card Data is and is not Sufficient under the Rule

The FTC confirmed and expounded upon its prior position that charging a parental credit card is sufficient to satisfy the rule—the parent will, at the very least, see the charge on their monthly statement and thus have notice of the child’s visit to the site. Merely gathering credit card information from a parent, without charging the card, is insufficient to satisfy the rule, however. That said, credit card information can be combined with other information—such as questions to which only parents would know the answer, or parent contact information—to meet the verifiable parental consent requirement.

Don’t Look at Us, Look at the App Store.

The FTC also clarified its guidance regarding parental consent for mobile applications given via an application store. Much the same way a charge to a parental credit card is sufficient, so too can an application store account be used as a COPPA-compliant parental consent method. For example, if the application store provides the required notice and consent verification prior to, or at the time of, the purchase of a mobile app marketed to children under 13, the mobile application developer can rely upon that consent.

Multiple Platform Consents.

The application store can also multi-task when it comes to obtaining COPPA consents. Application stores can now create multi-platform COPPA consent mechanisms. This consent function can satisfy the COPPA consent requirements for multiple mobile application developers. And—enterprising start-ups pay attention—providing this software service solution for mobile application providers does not create liability for the third-party application store or software company that builds the solution.

This flexibility for mobile developers is intended to open up space in the mobile application development market while still meeting the FTC’s goal of keeping parents in control of what their under 13 kids are viewing and disclosing on the Internet.