On August 5, 2014, Missouri voters approved Amendment 9 to the Missouri Constitution, making Missouri the first state in the nation to offer explicit constitutional protection to electronic communications and data from unreasonable searches and seizures.

The official ballot title asked:  “Shall the Missouri Constitution be amended so that the people shall be secure in their electronic communications and data from unreasonable searches and seizures as they are now likewise secure in their persons, homes, papers and effects?”

The fair ballot language specified: “A ‘yes’ vote will amend the Missouri Constitution to specify that electronic data and communications have the same protections from unreasonable searches and seizures as persons, papers, homes, and effects. A ‘no’ vote will not amend the Missouri Constitution regarding protections for electronic communications and data.”

The measure, which was approved by nearly 75% of voters, amended Section 15 of Article I of the Missouri Constitution to read:

That the people shall be secure in their persons, papers, homes, effects, and electronic communications and data, from unreasonable searches and seizures; and no warrant to search any place, or seize any person or thing, or access electronic data or communication, shall issue without describing the place to be searched, or the person or thing to be seized, or the data or communications to be accessed, as nearly as may be; nor without probable cause, supported by written oath or affirmation.

Missouri’s vote comes on the heels of the June 2014 U.S. Supreme Court’s ruling, as covered by CNN, that law enforcement must obtain a warrant to search cell phones seized during arrest.

Given the ruling of the Court, and this first-in-the-nation measure by Missouri, it is anticipated that other states will extend similar constitutional protections to electronic communications and data. Importantly, entities that operate as government contractors, and entities that may be considered state actors due to their funding, should be aware of these developments to determine what, if any, potential impact exists for their business.

In what is believed to be the largest security breach to date, the Associated Press reported that Russian hackers have stolen 1.2 billion user names and passwords. According to the AP, Milwaukee security firm Hold Security learned of the breach, but has yet to provide details about the series of website hackings believed to have affected 420,000 websites. Citing nondisclosure agreements, Hold Security has not named the hacked websites.

A concern raised by some is the “breach fatigue” that may be created by the continuing stream of news reports about breaches large and small, the notification letters that follow, and the repeated warnings and recommendations to individuals and businesses about addressing data security. This “condition” may be real, but it is a condition individuals and businesses have to overcome as “big data” and the “internet of things” (IoT) become more a part of our lives, creating value in data that criminals want to steal.

A frequent refrain from some, including many small businesses, is that incidents like these will not happen to them. But, as the L.A. Times reports, according to the National Small Business Association, 44% of survey respondents had been victims of at least one cyberattack. For well over a decade, identity theft has been the top crime reported to the FTC. For businesses, the risk is more than whether a breach will happen and how to respond; it is also the effect a breach can have on their reputation, the enforcement that increasingly follows these incidents at the federal and state levels, and increased litigation, including class actions. Late last month, for instance, the Massachusetts Attorney General’s office reported a $150,000 settlement with a local hospital based on allegations of failing to properly safeguard patient data and report the incident.

For many businesses, there are a number of “best practices” that are relatively easy to implement and can have a significant impact on reducing the risks of a data breach. Many say, “Yes, but where do we start?” Logically, the starting point is gaining an understanding of the business’s data privacy and security risks – doing a risk and vulnerability assessment. There are a number of resources available to assist in designing and carrying out an assessment. For example, the National Institute of Standards and Technology (NIST) recently issued a draft update of its primary guide to assessing security and privacy controls. While the work NIST does, including this guide, is designed for federal information systems and networks, it is an excellent and comprehensive source for businesses to understand steps they too can take to safeguard their systems and data.

The practical starting point, however, is getting management, C-suite support. Data privacy and security is an enterprise-wide risk which requires an enterprise-wide solution. Like many conditions, left untreated, “breach fatigue” can have significant consequences.

The New York Department of Financial Services recently published proposed regulations which would require virtual currency businesses operating in New York State to safeguard data and protect customer privacy.

Notably, the proposed regulations include requirements for virtual currency businesses to maintain cyber security programs and business continuity and disaster recovery plans.

Virtual currencies under the regulations include decentralized digital currencies (such as Bitcoin), as well as centrally issued or administered digital currencies and those that can be created by computerized or manufacturing effort (e.g., Bitcoin mining). Virtual currencies would not include digital units used in online gaming platforms that are of no value outside the gaming environment, nor would they include affinity and rewards program points that cannot be converted into or redeemed for government-issued currency.

Cyber security programs, very similar to the written information security programs we have previously discussed, would have to be in writing and be designed to ensure the availability and functionality of the business’s electronic systems and to protect those systems, and any sensitive data stored on them, from unauthorized access, use, or tampering. The cyber security program must perform five core cyber security functions:

  1. identify internal and external cyber risks;
  2. protect the business’s electronic systems, and the information stored on those systems, from unauthorized access, use, or other malicious acts;
  3. detect systems intrusions, data breaches, unauthorized access to systems or information, malware, and other Cyber Security Events;
  4. respond to detected Cyber Security Events to mitigate any negative effects; and
  5. recover from Cyber Security Events and restore normal operations and services.

Similarly, the cyber security policy must address the following areas:

  1. information security;
  2. data governance and classification;
  3. access controls;
  4. business continuity and disaster recovery planning and resources;
  5. capacity and performance planning;
  6. systems operations and availability concerns;
  7. systems and network security;
  8. systems and application development and quality assurance;
  9. physical security and environmental controls;
  10. customer data privacy;
  11. vendor and third-party service provider management;
  12. monitoring and implementing changes to core protocols not directly controlled by the business, as applicable; and
  13. incident response.

Some other key provisions of the cyber security program include the identification of a Chief Information Security Officer (“CISO”) — who is responsible for overseeing and implementing the cyber security program and enforcing its cyber security policy — as well as audit functions, which include annual penetration testing of the business’s electronic systems and audit trail systems to track and maintain data.

A 45-day public comment period began upon the publication of the proposed regulations.

As reported by HealthcareInfoSecurity.com, a former hospital employee is facing criminal charges brought by federal prosecutors in Texas for alleged violations of the privacy and security requirements under the Health Insurance Portability and Accountability Act (HIPAA). You may remember that back on June 1, 2005, the Department of Justice issued an opinion supporting the prosecution of individuals under HIPAA’s criminal enforcement provisions.  42 U.S.C. § 1320d-6(b). In 2010, we reported on a doctor in California who was sentenced to four months in prison for snooping into medical records. So, while prosecutions for privacy violations under HIPAA are not common, under certain circumstances individuals can be criminally prosecuted for violating HIPAA.

When is a violation of HIPAA criminal?

In short, a person who knowingly, and in violation of the HIPAA rules, does one or more of the following puts himself or herself in jeopardy of criminal prosecution under HIPAA:

  • uses or causes to be used a unique health identifier,
  • obtains individually identifiable health information relating to an individual, or
  • discloses individually identifiable health information to another person.

If convicted, the level of punishment depends on the seriousness of the offense:

  • a fine of up to $50,000 and/or imprisonment for up to one year for a simple violation;
  • a fine of up to $100,000 and/or imprisonment for up to five years if the offense is committed under false pretenses; and
  • a fine of up to $250,000 and/or imprisonment for up to ten years for offenses committed with intent to sell, transfer, or use individually identifiable health information for commercial advantage, personal gain, or malicious harm.
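The three tiers above can be summarized in a short sketch. This is illustrative only: the tier labels are our own shorthand, not statutory terms, and the figures are the maximums listed in 42 U.S.C. § 1320d-6(b).

```python
# Illustrative summary of HIPAA's criminal penalty tiers (42 U.S.C. § 1320d-6(b)).
# Tier labels are our own shorthand, not statutory language.
PENALTY_TIERS = {
    "simple": {"max_fine": 50_000, "max_prison_years": 1},
    "false_pretenses": {"max_fine": 100_000, "max_prison_years": 5},
    "commercial_gain_or_harm": {"max_fine": 250_000, "max_prison_years": 10},
}

def max_penalty(tier: str) -> str:
    """Return a human-readable statement of the maximum penalty for a tier."""
    t = PENALTY_TIERS[tier]
    return (f"fine up to ${t['max_fine']:,} and/or "
            f"up to {t['max_prison_years']} year(s) in prison")
```

For example, `max_penalty("false_pretenses")` describes the middle tier: a fine of up to $100,000 and/or up to five years in prison.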

Texas Prosecution

According to the DOJ, the former East Texas hospital employee has been indicted for criminal violations of HIPAA. The individual is being charged with wrongful disclosure of individually identifiable health information. The DOJ alleges that from December 1, 2012, through January 14, 2013, while an employee of the hospital (a HIPAA covered entity), the individual obtained protected health information with the intent to use the information for personal gain. If convicted, the individual faces up to ten years in prison.

Although not common, criminal prosecutions like this one can be an important reminder to workforce members of HIPAA covered entities that violating the HIPAA rules can result in more than the loss of their jobs. Some covered entities inform their employees of the potential for criminal sanctions as part of their new hire and annual trainings.

In response to reported ongoing confusion regarding how to satisfy the “verifiable parental consent” requirements in the Children’s Online Privacy Protection Act (“COPPA”), 15 U.S.C. § 6501 et seq. (1998), and its implementing regulations, 16 CFR Part 312 (2000), the Federal Trade Commission (“FTC”) revised its guidance on enforcement of the same. According to the FTC, “The primary goal of COPPA is to place parents in control over what information is collected from their young children online. The Rule was designed to protect children under age 13 while accounting for the dynamic nature of the Internet.” The FTC provides interpretive guidance on COPPA and the regulations promulgated under it via Frequently Asked Questions (“FAQs”) on its business center website. The FTC revised these FAQs on July 16, 2014.

The revised FAQs generally affirm the FTC’s longstanding position that its list of acceptable methods to obtain verifiable parental consent is not exhaustive. Instead, web-based and mobile application designers are free to use creative methods of verifying parental consent if such consent can be shown to be a “reasonable effort (taking into consideration available technology) . . . to ensure that a parent of a child receives notice of the operator’s personal information collection, use, and disclosure practices, and authorizes the collection, use, and disclosure, as applicable, of personal information and the subsequent use of that information before that information is collected from that child.” 15 U.S.C. § 6501(9).

So, what’s different under the new guidance?

When Parental Credit Card Data is and is not Sufficient under the Rule

The FTC confirmed and expounded upon its prior position that charging a parental credit card is sufficient to satisfy the rule—the parent will, at the very least, see the charge on their monthly statement and thus have notice of the child’s visit to the site. Merely gathering credit card information from a parent, without charging the card, is insufficient to satisfy the rule, however. That said, credit card information can be combined with other information—such as questions to which only parents would know the answer, or parent contact information—to meet the verifiable parental consent requirement.

Don’t Look at Us, Look at the App Store.

The FTC also clarified its guidance regarding parental consent for mobile applications given via an application store. Much the same way a charge to a parental credit card is sufficient, so too can an application store account be used as a COPPA-compliant parental consent method. For example, if the application store provides the required notice and consent verification prior to, or at the time of, the purchase of a mobile app marketed to children under 13, the mobile application developer can rely upon that consent.

Multiple Platform Consents.

The application store can also multi-task when it comes to obtaining COPPA consents. Application stores can now create multiple platform COPPA consent mechanisms. This consent function can satisfy the COPPA consent requirements for multiple mobile application developers. And—enterprising start-ups pay attention—providing this software service solution for mobile application providers does not create liability for the third party application store or software company that builds the solution.

This flexibility for mobile developers is intended to open up space in the mobile application development market while still meeting the FTC’s goal of keeping parents in control of what their under-13 kids are viewing and disclosing on the Internet.

When the United States Supreme Court handed down its decision in Riley v. California, a Fourth Amendment criminal case, we suspected it would not be long before the rationale in that case concerning the privacy interests individuals have in cellphones would be more broadly applied. In late June, a federal district court in Connecticut denied a request by two former employees to inspect six years of cellphone data for ten other employees on cellphones that either were provided or paid for by the employer. Bakhit v. Safety Marking, Inc., D. Conn., No. 3:13-CV-1049, June 26, 2014. The plaintiffs were interested in text messages, e-mails, and other information and data, including metadata, that might provide evidence of racial and other discrimination.

Chief Justice Roberts’s language in Riley raises interesting parallels in the civil context when thinking about cellphone and mobile device privacy and security, particularly in an environment of more widespread use of “Bring Your Own Device” (BYOD) and cloud computing platforms. For example, the decision acknowledges:

Cell phones differ in both a quantitative and a qualitative sense from other objects that might be kept on an arrestee’s person. The term “cell phone” is itself misleading shorthand; many of these devices are in fact minicomputers that also happen to have the capacity to be used as a telephone. They could just as easily be called cameras, video players, rolodexes, calendars, tape recorders, libraries, diaries, albums, televisions, maps, or newspapers…

An Internet search and browsing history, for example, can be found on an Internet-enabled phone and could reveal an individual’s private interests or concerns—perhaps a search for certain symptoms of disease, coupled with frequent visits to WebMD. Data on a cell phone can also reveal where a person has been…

Mobile application software on a cell phone, or “apps,” offer a range of tools for managing detailed information about all aspects of a person’s life. There are apps for Democratic Party news and Republican Party news; apps for alcohol, drug, and gambling addictions; apps for sharing prayer requests; apps for tracking pregnancy symptoms; apps for planning your budget; apps for every conceivable hobby or pastime; apps for improving your romantic life. There are popular apps for buying or selling just about anything, and the records of such transactions may be accessible on the phone indefinitely. There are over a million apps available in each of the two major app stores; the phrase “there’s an app for that” is now part of the popular lexicon. The average smart phone user has installed 33 apps, which together can form a revealing montage of the user’s life.

Citing some of the same language quoted above, Magistrate Judge Holly B. Fitzsimmons found the Supreme Court’s observations about cellphone technology and privacy interests reinforced her own conclusions that the request by the plaintiffs in her case was overbroad, and that they had failed to exhaust other options to obtain similar information.

Businesses encounter a number of risks when they monitor and search devices used by employees, whether those devices are owned by the company or the employee. The acknowledgement by the Supreme Court of the unique nature of today’s smart communications devices has begun to heighten the scrutiny with which courts examine access to these devices, whether by other employees or employers. Surely, employers should be thinking more carefully about the nature and extent of searches they may conduct on these devices, but also whether their policies are drafted clearly enough to alert employees of the potential scope of such searches and the level of privacy employees can expect.

As I write this post, the U.S. v. Belgium match is underway – a win is needed by the United States to advance to the quarterfinals of the 2014 World Cup. Most watching the game may not realize that GPS technology will be monitoring just about every movement taken by U.S. players on the field as well as other metrics, as reported by Bloomberg. According to the report, the team’s medical staff uses matchbox-sized GPS tracking devices with the goal (no pun intended) of keeping players free from injury. Of course, the technology is used for purposes other than injury prevention; coaches can use it to adjust strategies based on positioning and endurance measured through the devices.

So, if this technology can be effective to minimize injury and improve productivity on the soccer (futbol) field, can we expect to see more widespread use, say in the workplace? Feel free to comment below.

Clearly there are many issues to be considered by employers, many of which we have covered in this forum, including the power of “Big Data” analytics tools to process the vast amounts of data that can be captured with this technology.

But for now, enjoy the game. Go USA!

As we reported earlier, Florida lawmakers passed SB 1524, which extensively revises the state’s existing data breach notification law. On June 20, 2014, Florida’s Governor Rick Scott signed the bill into law, and it becomes effective on July 1, 2014.

Our earlier post provides more of a discussion about key provisions of the law. But here are a few reminders:

  • The law adds to the definition of “personal information” an individual’s user name or e-mail address in combination with a password or security question and answer that would permit access to an online account.
  • Individuals must be notified of a breach as expeditiously as possible, but no later than thirty (30) days from discovery of the breach, when the individual’s personal information was accessed as a result of a breach, or the covered entity reasonably believes it was.
  • If the breach affects 500 or more Floridians, the state’s Attorney General must be notified no later than thirty (30) days after the determination that a breach has occurred or reason to believe one occurred. Current Attorney General Pam Bondi has promised greater enforcement. Note also that under the new law the Attorney General may require covered entities to provide copies of their policies regarding breaches, steps taken to rectify the breach, and a police report, incident report, or computer forensics report.
  • The law also imposes a statutory requirement to safeguard personal information. So, as in a number of other states such as California, Connecticut, Maryland, Massachusetts, and Oregon, businesses in Florida (and possibly businesses outside of the Sunshine State) that maintain personal information about Florida residents should take steps to be sure they have reasonable policies and procedures in writing to safeguard such information.
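The notice timelines above lend themselves to a simple sketch. This is illustrative only, assuming the thirty-day windows and 500-resident threshold described above; the function and constant names are ours, and none of this substitutes for reading the statute.

```python
from datetime import date, timedelta

# Sketch of the Florida notice deadlines described above (names are ours,
# not statutory terms). Assumes 30-day windows and a 500-resident AG threshold.
INDIVIDUAL_NOTICE_DAYS = 30   # notify individuals no later than 30 days from discovery
AG_NOTICE_DAYS = 30           # notify the Attorney General within 30 days
AG_THRESHOLD = 500            # AG notice required if 500 or more Floridians affected

def notice_deadlines(discovery: date, floridians_affected: int) -> dict:
    """Return the latest permissible notice dates for a given breach."""
    deadlines = {"individuals": discovery + timedelta(days=INDIVIDUAL_NOTICE_DAYS)}
    if floridians_affected >= AG_THRESHOLD:
        deadlines["attorney_general"] = discovery + timedelta(days=AG_NOTICE_DAYS)
    return deadlines
```

For a breach discovered July 1, 2014 affecting 600 Floridians, both individual and Attorney General notice would be due by July 31, 2014.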

The last couple of times I passed by the TV to see what the kids were watching, I was surprised not to see Spongebob Squarepants or the Yankee game (Michael and Grace have their separate interests, but they usually can agree on something, at least in the short term). Anyway, they happened to be intently watching the Food Network show Restaurant Stakeout. You may know the show – Willie Degel, host and restaurateur, uses a myriad of cameras to monitor how restaurant owners run their businesses in order to critique their management styles and hopefully improve their businesses. I sat down to watch.

Mr. Degel is no doubt an exciting and informative host, but it was not his management advice (which may have been very good) that kept me watching. The level of monitoring employed by the show is indicative of the growing level of surveillance going on at workplaces across the country. But as the New York Times reported on Saturday, and as we have reported previously, monitoring is not limited to fancy cameras tilting and panning. According to the Times’ story, through complex workplace analytics

companies have found, for example, that workers are more productive if they have more social interaction. So a bank’s call center introduced a shared 15-minute coffee break, and a pharmaceutical company replaced coffee makers used by a few marketing workers with a larger cafe area. The result? Increased sales and less turnover.

Of course, privacy and data security concerns exist, but there currently are few insurmountable legal obstacles in most of the United States if proper steps are taken, such as notifying employees and customers and managing the data carefully. Still, many may have an uncomfortable feeling about this level of monitoring, although the Times report about restaurant servers didn’t say the servers quit. Instead, those who knew “they were being monitored, pushed customers to have that dessert or a second beer, which resulted in the increased revenue for the restaurant and tips for themselves.”

Saying there are few legal obstacles may be a bit premature, however, as courts often struggle to keep pace with technology. How decisions are made concerning employees (and applicants) with vast amounts of data acquired through such monitoring and analytics could be a significant area of legal risk. Our prior report highlighted some others. Clearly, companies need to be prudent when deciding whether and how to implement such technologies, but they need not run from them as the opportunity costs and other costs for not adopting and learning from these technologies can be far greater than the costs or exposures incurred from adopting them.

These software marvels that track and analyze all aspects of a workplace to find those who are most productive, or to make those less productive more productive, may soon have a dramatic impact on the workplace. It’s unclear whether they will replace the experience and grit of the Willie Degels of the world, but both my kids and I hope they don’t!

As reported by ESPN, Jimmy Graham‘s Twitter bio could play a crucial role in the National Football League (“NFL”) arbitration hearing between the New Orleans Saints and Graham.

For those unfamiliar with the story, the New Orleans Saints placed a tight-end franchise tag on Graham. Under the tag, Graham must be offered a one-year contract for an amount no less than the average of the top five salaries at the player’s position for the previous year, or 120 percent of the player’s previous year’s salary, whichever is greater. By utilizing a tight-end tag, the one-year contract for Graham would be $7.035 million. However, in response to the tag, Graham filed a grievance arguing that he is more deserving of the significantly larger $12.312 million franchise tag for wide receivers.
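The tag formula above is simply the greater of two numbers, which can be sketched in a few lines. The salary figures in the example are hypothetical, used only to show the arithmetic.

```python
# Sketch of the franchise-tag formula described above: the greater of
# (a) the average of the top five salaries at the position for the previous
# year, and (b) 120 percent of the player's previous year's salary.
def franchise_tag(top_five_salaries: list[float], previous_salary: float) -> float:
    """Return the minimum one-year tender under the franchise tag."""
    position_average = sum(top_five_salaries) / len(top_five_salaries)
    salary_bump = 1.20 * previous_salary
    return max(position_average, salary_bump)
```

With hypothetical top-five salaries of $10M, $8M, $6M, $4M, and $2M (average $6M) and a previous salary of $7M, the 120-percent figure ($8.4M) controls because it is the greater of the two.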

Graham is one of the premier pass-catching players in the NFL, and he argues he should be considered a receiver because he lined up as a receiver 67% of the time last season. In response, the Saints argue that Graham was drafted as a tight end and made the Pro Bowl at that position. Notably, the Saints’ arguments for designating Graham as a tight end also include the fact that Graham’s own Twitter bio lists him as a tight end.

This appears to be the first instance where a social media bio is being utilized in this way during an NFL grievance hearing, and while the ultimate outcome of the hearing is unknown, the Saints’ arguments further highlight the role social media plays in today’s workplace.