
Workplace Privacy, Data Management & Security Report

Yes, a Person Can be Criminally Prosecuted for Violating HIPAA

As reported by HealthcareInfoSecurity.com, a former hospital employee is facing criminal charges brought by federal prosecutors in Texas for alleged violations of the privacy and security requirements under the Health Insurance Portability and Accountability Act (HIPAA). You may remember that back on June 1, 2005, the Department of Justice issued an opinion supporting the prosecution of individuals under HIPAA’s criminal enforcement provisions.  42 U.S.C. § 1320d-6(b). In 2010, we reported on a doctor in California who was sentenced to four months in prison for snooping into medical records. So, while prosecutions for privacy violations under HIPAA are not common, under certain circumstances individuals can be criminally prosecuted for violating HIPAA.

When is a violation of HIPAA criminal?

In short, a person who knowingly and in violation of the HIPAA rules does one or more of the following is in jeopardy of criminal prosecution under HIPAA:

  • uses or causes to be used a unique health identifier,
  • obtains individually identifiable health information relating to an individual, or
  • discloses individually identifiable health information to another person.

If convicted, the level of punishment depends on the seriousness of the offense:

  • a fine of up to $50,000 and/or imprisonment for up to one year for a simple violation
  • a fine of up to $100,000 and/or imprisonment for up to five years if the offense is committed under false pretenses
  • a fine of up to $250,000 and/or imprisonment for up to ten years for offenses committed with intent to sell, transfer, or use individually identifiable health information for commercial advantage, personal gain, or malicious harm.
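The three tiers above can be sketched as a simple lookup. This is an illustrative sketch only, not legal advice: the tier labels below are our own shorthand for the statutory categories in 42 U.S.C. § 1320d-6(b), not terms from the statute.

```python
# Rough mapping of the HIPAA criminal penalty tiers described above.
# Tier names ("simple", etc.) are illustrative labels, not statutory terms.
PENALTY_TIERS = {
    "simple": {"max_fine": 50_000, "max_prison_years": 1},
    "false_pretenses": {"max_fine": 100_000, "max_prison_years": 5},
    "commercial_gain": {"max_fine": 250_000, "max_prison_years": 10},
}

def max_exposure(offense: str) -> str:
    """Return the statutory maximums for a given offense tier."""
    tier = PENALTY_TIERS[offense]
    return (f"up to ${tier['max_fine']:,} and/or "
            f"{tier['max_prison_years']} year(s) imprisonment")
```

For example, `max_exposure("false_pretenses")` reports the middle tier's maximums; actual sentences depend on the facts and the court.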

Texas Prosecution

According to the DOJ, the former East Texas hospital employee has been indicted for criminal violations of HIPAA. The individual is being charged with wrongful disclosure of individually identifiable health information. The DOJ alleges that from December 1, 2012, through January 14, 2013, while an employee of the hospital (a HIPAA covered entity), the individual obtained protected health information with the intent to use the information for personal gain. If convicted, the individual faces up to ten years in prison.

Although not common, criminal prosecutions like this one can be an important reminder to workforce members of HIPAA covered entities that violating the HIPAA rules can result in more than the loss of their jobs. Some covered entities inform their employees of the potential for criminal sanctions as part of their new hire and annual trainings.

FTC Amends Guidance to Children’s Online Privacy Protection Act (COPPA) Rules, Clarifying “Verifiable Parental Consent” Requirements

Written by Amy Worley

In response to reported on-going confusion regarding how to satisfy the “verifiable parental consent” requirements in the Children’s Online Privacy Protection Act (“COPPA”), 15 U.S.C. § 6501 et seq. (1998), and its implementing regulations, 16 CFR Part 312 (2000), the Federal Trade Commission (“FTC”) revised its guidance on enforcement of the same. According to the FTC, “The primary goal of COPPA is to place parents in control over what information is collected from their young children online. The Rule was designed to protect children under age 13 while accounting for the dynamic nature of the Internet.” The FTC provides interpretive guidance on COPPA and the regulations promulgated under it via Frequently Asked Questions (“FAQs”) on its business center website. The FTC revised these FAQs on July 16, 2014.

The revised FAQs generally affirm the FTC’s longstanding position that its list of acceptable methods to obtain verifiable parental consent is not exhaustive. Instead, web-based and mobile application designers are free to use creative methods of verifying parental consent if such consent can be shown to be a “reasonable effort (taking into consideration available technology) . . . to ensure that a parent of a child receives notice of the operator’s personal information collection, use, and disclosure practices, and authorizes the collection, use, and disclosure, as applicable, of personal information and the subsequent use of that information before that information is collected from that child.” 15 U.S.C. § 6501(9).

So, what’s different under the new guidance?

When Parental Credit Card Data is and is not Sufficient under the Rule

The FTC confirmed and expounded upon its prior position that charging a parental credit card is sufficient to satisfy the rule—the parent will, at the very least, see the charge on their monthly statement and thus have notice of the child’s visit to the site. Merely gathering credit card information from a parent, without charging the card, is insufficient to satisfy the rule, however. That said, credit card information can be combined with other information—such as questions to which only parents would know the answer, or parent contact information—to meet the verifiable parental consent requirement.

Don’t Look at Us, Look at the App Store.

The FTC also clarified its guidance regarding parental consent for mobile applications given via an application store. Much the same way a charge to a parental credit card is sufficient, so too can an application store account be used as a COPPA-compliant parental consent method. For example, if the application store provides the required notice and consent verification prior to, or at the time of, the purchase of a mobile app marketed to children under 13, the mobile application developer can rely upon that consent.

Multiple Platform Consents.

The application store can also multi-task when it comes to obtaining COPPA consents. Application stores can now create multiple-platform COPPA consent mechanisms. This consent function can satisfy the COPPA consent requirements for multiple mobile application developers. And—enterprising start-ups pay attention—providing this software service solution for mobile application providers does not create liability for the third-party application store or software company that builds the solution.

This flexibility for mobile developers is intended to open up space in the mobile application development market while still meeting the FTC’s goal of keeping parents in control of what their under-13 kids are viewing and disclosing on the Internet.

Supreme Court Decision in Riley Affects Cellphone Searches in Civil Litigation, Employment Matters

When the United States Supreme Court handed down its decision in Riley v. California, a Fourth Amendment criminal case, we suspected it would not be long before the rationale in that case concerning the privacy interests individuals have in cellphones would be more broadly applied. In late June, a federal district court in Connecticut denied a request by two former employees to inspect six years of cellphone data for ten other employees on cellphones that either were provided or paid for by the employer. Bakhit v. Safety Marking, Inc., D. Conn., No. 3:13-CV-1049, June 26, 2014. The plaintiffs were interested in text messages, e-mails, and other information and data, including metadata, that might provide evidence of racial and other discrimination.

Chief Justice Roberts’ language in Riley raises interesting parallels in the civil context when thinking about cellphone and mobile device privacy and security, particularly in an environment of more widespread use of “Bring Your Own Device” (BYOD) and cloud computing platforms. For example, the decision acknowledges:

Cell phones differ in both a quantitative and a qualitative sense from other objects that might be kept on an arrestee’s person. The term “cell phone” is itself misleading shorthand; many of these devices are in fact minicomputers that also happen to have the capacity to be used as a telephone. They could just as easily be called cameras, video players, rolodexes, calendars, tape recorders, libraries, diaries, albums, televisions, maps, or newspapers…

An Internet search and browsing history, for example, can be found on an Internet-enabled phone and could reveal an individual’s private interests or concerns—perhaps a search for certain symptoms of disease, coupled with frequent visits to WebMD. Data on a cell phone can also reveal where a person has been…

Mobile application software on a cell phone, or “apps,” offer a range of tools for managing detailed information about all aspects of a person’s life. There are apps for Democratic Party news and Republican Party news; apps for alcohol, drug, and gambling addictions; apps for sharing prayer requests; apps for tracking pregnancy symptoms; apps for planning your budget; apps for every conceivable hobby or pastime; apps for improving your romantic life. There are popular apps for buying or selling just about anything, and the records of such transactions may be accessible on the phone indefinitely. There are over a million apps available in each of the two major app stores; the phrase “there’s an app for that” is now part of the popular lexicon. The average smart phone user has installed 33 apps, which together can form a revealing montage of the user’s life.

Citing some of the same language quoted above, Magistrate Judge Holly B. Fitzsimmons found that the Supreme Court’s observations about cellphone technology and privacy interests reinforced her own conclusion that the plaintiffs’ request was overbroad and that they had failed to exhaust other options to obtain similar information.

Businesses encounter a number of risks when they monitor and search devices used by employees, whether those devices are owned by the company or the employee. The acknowledgement by the Supreme Court of the unique nature of today’s smart communications devices has begun to heighten the scrutiny with which courts examine access to these devices, whether by other employees or employers. Employers should be thinking more carefully not only about the nature and extent of searches they may conduct on these devices, but also about whether their policies are drafted clearly enough to alert employees to the potential scope of such searches and the level of privacy employees can expect.

USA Soccer Team Players Monitored by GPS to Reduce Injury and Improve Productivity…a Tool for the Workplace?

As I write this post, the U.S. v. Belgium match is underway - a win is needed by the United States to advance to the quarterfinals of the 2014 World Cup. Most watching the game may not realize that GPS technology will be monitoring just about every movement taken by U.S. players on the field as well as other metrics, as reported by Bloomberg. According to the report, the team’s medical staff uses matchbox-sized GPS tracking devices with the goal (no pun intended) of keeping players free from injury. Of course, the technology is used for purposes other than injury prevention; coaches can use it to adjust strategies based on positioning and endurance measured through the devices.

So, if this technology can be effective to minimize injury and improve productivity on the soccer (futbol) field, can we expect to see more widespread use, say in the workplace? Feel free to comment below.

Clearly there are many issues to be considered by employers, many of which we have covered in this forum, including the power of “Big Data” analytics tools to process the vast amounts of data that can be captured with this technology.

But for now, enjoy the game. Go USA!

Strengthened Florida Data Breach Notification Law Signed by Governor Scott

As we reported earlier, Florida lawmakers passed extensive revisions to the state’s existing data breach notification law, SB 1524. On June 20, 2014, Florida’s Governor Rick Scott signed the bill into law, which becomes effective on July 1, 2014.

Our earlier post provides more of a discussion about key provisions of the law. But here are a few reminders:

  • The law adds to the definition of “personal information” an individual’s user name or e-mail address in combination with a password or security question and answer that would permit access to an online account.
  • Individuals must be notified of a breach as expeditiously as possible, but no later than thirty (30) days from discovery of the breach when the individual’s personal information was, or the covered entity reasonably believes it was, accessed as a result of a breach.
  • If the breach affects 500 or more Floridians, the state’s Attorney General must be notified no later than thirty (30) days after the determination that a breach has occurred or reason to believe one occurred. Current Attorney General Pam Bondi has promised greater enforcement. Note also that under the new law the Attorney General may require covered entities to provide copies of their policies regarding breaches, steps taken to rectify the breach, and a police report, incident report, or computer forensics report.
  • The law also imposes a statutory requirement to safeguard personal information. So, as in a number of other states such as California, Connecticut, Maryland, Massachusetts, and Oregon, businesses in Florida (and possibly businesses outside of the Sunshine State) that maintain personal information about Florida residents should take steps to be sure they have reasonable policies and procedures in writing to safeguard such information.
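The two 30-day clocks and the 500-person Attorney General threshold summarized above can be sketched in a few lines. This is a minimal illustration only; when a deadline actually starts to run (discovery vs. determination of a breach) is a legal question, not a datetime calculation.

```python
# Minimal sketch of the notification timelines in Florida's revised
# breach law as summarized above. Illustrative only -- not legal advice.
from datetime import date, timedelta

NOTIFICATION_WINDOW = timedelta(days=30)
AG_REPORTING_THRESHOLD = 500  # Floridians affected

def individual_notice_deadline(discovery: date) -> date:
    """Latest date to notify affected individuals after breach discovery."""
    return discovery + NOTIFICATION_WINDOW

def must_notify_attorney_general(affected_floridians: int) -> bool:
    """True if the breach triggers notice to the state's Attorney General."""
    return affected_floridians >= AG_REPORTING_THRESHOLD
```

So a breach discovered on the law's effective date, July 1, 2014, would require individual notice no later than July 31, 2014, absent an applicable exception.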

Restaurant Stakeout: A Sign of the Times for Workplace Monitoring?

The last couple of times I passed by the TV to see what the kids were watching, I was surprised not to see SpongeBob SquarePants or the Yankee game (Michael and Grace have their separate interests, but they usually can agree on something, at least in the short term). Anyway, they happened to be intently watching the Food Network show Restaurant Stakeout. You may know the show – Willie Degel, host and restaurateur, uses a myriad of cameras to monitor how restaurant owners run their businesses in order to critique their management styles and, hopefully, improve their operations. I sat down to watch.

Mr. Degel is no doubt an exciting and informative host, but it was not his management advice (which may have been very good) that kept me watching. The level of monitoring employed by the show is indicative of the growing level of surveillance going on at workplaces across the country. But as the New York Times reported on Saturday, and as we have reported previously, monitoring is not limited to fancy cameras tilting and panning. According to the Times’ story, through complex workplace analytics

companies have found, for example, that workers are more productive if they have more social interaction. So a bank’s call center introduced a shared 15-minute coffee break, and a pharmaceutical company replaced coffee makers used by a few marketing workers with a larger cafe area. The result? Increased sales and less turnover.

Of course, privacy and data security concerns exist, but there generally are few current insurmountable legal obstacles in most of the United States if proper steps are taken, such as notifying employees and customers, and managing the data carefully. Still, many may have an uncomfortable feeling about this level of monitoring, although the Times report about restaurant servers didn’t say the servers quit. Instead, those who knew “they were being monitored, pushed customers to have that dessert or a second beer, which resulted in the increased revenue for the restaurant and tips for themselves.”

Saying there are few legal obstacles may be a bit premature, however, as courts often struggle to keep pace with technology. How decisions are made concerning employees (and applicants) with vast amounts of data acquired through such monitoring and analytics could be a significant area of legal risk. Our prior report highlighted some others. Clearly, companies need to be prudent when deciding whether and how to implement such technologies, but they need not run from them: the opportunity and other costs of not adopting and learning from these technologies can be far greater than the costs or exposures of adopting them.

These software marvels that track and analyze all aspects of a workplace to find those who are most productive, or how to make those less productive more productive, may soon have a dramatic impact on the workplace. It’s unclear whether they will replace the experience and grit of the Willie Degels of the world, but both my kids and I hope they don’t!

Twitter Bio At Issue In NFL Arbitration

As reported by ESPN, Jimmy Graham’s Twitter bio could play a crucial role in the National Football League (“NFL”) arbitration hearing between the New Orleans Saints and Graham.

For those unfamiliar with the story, the New Orleans Saints placed a tight-end franchise tag on Graham.  Under the tag, Graham must be offered a one-year contract for an amount no less than the average of the top five salaries at the player’s position for the previous year, or 120 percent of the player’s previous year’s salary, whichever is greater.  By utilizing a tight-end tag, the one year contract for Graham would be $7.035 million.  However, in response to the tag, Graham filed a grievance arguing that he’s more deserving of the significantly larger $12.312 million franchise tag for wide receivers.
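The tender formula described above is a simple greater-of calculation. The sketch below is illustrative only; the salary figures in the usage note are hypothetical, not actual NFL data.

```python
# Illustrative sketch of the franchise-tag formula described above:
# the greater of (a) the average of the top five salaries at the
# player's position the previous year, or (b) 120% of the player's
# previous year's salary.
def franchise_tender(top_five_salaries: list[float],
                     previous_salary: float) -> float:
    """One-year tender amount: whichever of the two figures is greater."""
    positional_average = sum(top_five_salaries) / len(top_five_salaries)
    return max(positional_average, 1.20 * previous_salary)
```

For a player whose position's top five earned $10M, $9M, $8M, $7M, and $6M (average $8M) and who previously made $5M, the tender is $8M, since the positional average beats 120% of $5M.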

Graham is one of the premier pass-catching players in the NFL, and he argues he should be considered a receiver because he lined up as a receiver 67% of the time last season.  In response, the Saints argue that Graham was drafted as a tight end and made the Pro Bowl at that position.  Notably, the Saints’ arguments for designating Graham as a tight end also include the fact that Graham’s own Twitter bio lists him as a tight end.

This appears to be the first instance where a social media bio is being utilized in this way during an NFL grievance hearing, and while the ultimate outcome of the hearing is unknown, the Saints’ arguments further highlight the role social media plays in today’s workplace.

Prepare For Increased HIPAA Fines

Since mid-2013, the Department of Health and Human Services has recovered more than $10 million from numerous entities in connection with alleged violations of the Health Insurance Portability and Accountability Act (“HIPAA”).  However, during a recent American Bar Association conference, Jerome B. Meites, a chief regional civil rights counsel at the Department of Health and Human Services (“HHS”) told attendees he expects the past 12 months of enforcement to pale in comparison to the next 12 months.  According to Mr. Meites, HHS’ Office of Civil Rights (“OCR”) desires to send a strong message to the industry through high-impact cases.

In addition to the anticipated increase in fines, Mr. Meites also said that the OCR still expects to begin conducting new rounds of HIPAA audits later this year on some of the 1,200 companies that were identified earlier this year as potential audit candidates.  These 1,200 companies include approximately 800 covered entities (health care providers, insurers, or clearinghouses) and about 400 business associates.

Mr. Meites also made two extremely pertinent comments concerning HIPAA compliance.  Specifically, he said that portable media devices have caused an enormous number of the complaints that the OCR deals with and that an entity’s failure to perform a comprehensive risk assessment, as required by HIPAA, has factored into most of the data breach cases which resulted in financial settlements.

Entities subject to HIPAA’s requirements need to be conscious of not only the planned aggressive punishment related to privacy breaches and security lapses, but also the OCR’s extensive audit strategy.   However, simply knowing that such plans are in place is not enough, and entities subject to HIPAA should begin to examine their own policies and practices and make changes as needed to address these issues.

The K5 Autonomous Data Machine Might Soon Be Securing and Monitoring Your Business

Developed by Knightscope, the K5 Autonomous Data Machine is a 5-foot-tall, 300-pound robotic device designed to be “a safety and security tool for corporations, as well as for schools and neighborhoods,” as reported by the New York Times. While K5 may not yet be ready for prime time, its developers are hoping to lure early adopters at “technology companies that employ large security forces to protect their sprawling campuses.” Eventually, K5 could be used to roam city streets, schools, shopping centers, and, yes, workplaces.

According to the Times, K5 will be equipped with a “video camera, thermal imaging sensors, a laser range finder, radar, air quality sensors and a microphone.” The stated mission of the developers is to reduce crime by 50%. They explain that data collected through K5’s sensors is “processed through our predictive analytics engine, combined with existing business, government and crowdsourced social data sets, and subsequently assigned an alert level that determines when the community and the authorities should be notified of a concern.” It is not a stretch to think that the device’s capabilities could be modified to address different applications.

Some are raising concerns that this and similar devices will take jobs away from the private security guard industry. Others believe K5 will only add to “big brother”-type surveillance that continues to erode personal privacy. Just this week, New York City Mayor Bill de Blasio announced a substantial increase in surveillance cameras to be installed in some of the City’s public housing developments. In many settings, concerns about security are winning over concerns about privacy. Consider the assisted living, nursing home business where patient and resident abuse is driving a greater need for security at the expense of privacy and despite the need for added compliance measures under HIPAA.

K5 raises additional issues for the workplace. Having K5 roaming around retail space, office space, common areas and so on, even if only intended to address security concerns of the business, could trigger a number of unintended consequences for employers and their employees. K5 might capture evidence of employee negligence in connection with treatment of customers or patients. Capable of audio and video recording, K5 could record conversations between employees, between employees and supervisors, between employees and family members and other communications that raise workplace privacy and other issues. For example, capturing a conversation between an employee and her spouse about care for their child suffering from a disease could raise issues under the Genetic Information Nondiscrimination Act. Of course, recordings like these, which could include recording of communications between employees and customers or patients, could be made without first obtaining the consent of one or all parties to the conversation, in violation of federal and/or state laws. Video of an employee working past his or her scheduled time could become evidence of wage and hour violations. Increased workplace surveillance might be argued by some to chill protected speech by employees. These are only examples of the potential workplace risks and, of course, there are potential benefits to this kind of technology. K5 may in fact provide greater security to employees and deter prohibited and criminal activities.

Devices like K5 are not inherently good or bad. Rather, the purposes for which they are used and the surrounding circumstances, among other things, will determine the relevant risks and appropriateness. There certainly will be no shortage of devices like K5 in the years to come. The message to businesses, however, is to understand the capabilities of these devices, carefully think through the business and workplace applications and consequences, and hope that the law soon catches up to provide some guidance.

California Healthcare Provider Defeats Data Breach Class Action on Definition of Medical Information

Written by Ann Haley Fromholz

In a victory for California healthcare providers, the California Court of Appeal recently held that a health care provider is not liable under California’s Confidentiality of Medical Information Act (CMIA) (Cal. Civ. Code, § 56 et seq.) when the health care provider releases an individual’s personal identifying information, but the information does not include the person’s medical history, mental or physical condition, or treatment.  The case was a win for the health care provider and, more importantly, provided critical clarity about the definition of “medical information” under the CMIA.

In Eisenhower Medical Center v. Superior Court of Riverside County, plaintiffs sued on behalf of a putative class whose information was disclosed by EMC when a computer with information about over 500,000 people was stolen from EMC. The information included each person’s name, medical record number, age, date of birth, and last four digits of the person’s Social Security number. The information was password protected but was not encrypted.

The CMIA makes it unlawful for a health care provider to disclose or release medical information regarding a patient of the provider without first obtaining authorization.  An individual can recover $1,000 in damages for the improper release of information, and need not show actual damage to recover the $1,000.  

The CMIA defines “medical information” as:

any individually identifiable information, in electronic or physical form, in possession of or derived from a provider of health care, health care service plan, pharmaceutical company, or contractor regarding a patient’s medical history, mental or physical condition, or treatment. ‘Individually identifiable’ means that the medical information includes or contains any element of personal identifying information sufficient to allow identification of the individual, such as the patient’s name, address, electronic mail address, telephone number, or social security number, or other information that, alone or in combination with other publicly available information, reveals the individual’s identity.

In addition, the CMIA permits acute care hospitals to disclose certain patient information upon demand and without authorization from the patient.  Section 56.16 of the CMIA allows hospitals to reveal medical information regarding the general description of the reason for the treatment, the general nature of the injury, and the general condition of the patient, as well as nonmedical information.  The court reasoned that, although section 56.16 applies only when there is a demand for information, it does show that information solely identifying a person as a patient (and nothing more) is not given the same protection as more specific information about the person’s medical history.

EMC argued that the theft of the computer did not result in a disclosure of “medical information,” as defined in the CMIA, of any of the people at issue.  The computer did not contain information about their medical history, condition, or treatment; instead, that information is saved only on EMC’s servers, which are located in its data center. While EMC conceded that the index on the computer contained “individually identifiable information,” EMC maintained that the index did not include information “regarding a patient’s medical history, mental or physical condition, or treatment,” which is required to find a violation of the CMIA. 

The court agreed, reasoning that a release of information is prohibited by the CMIA only when it includes information relating to medical history, mental or physical condition, or treatment of the individual.  The court explained that medical information does not include all patient-related information held by a healthcare provider, but must be “individually identifiable information” and also include “a patient’s medical history, mental or physical condition, or treatment.”  This definition of medical information does not encompass demographic or other information that does not reveal a patient’s medical history, diagnosis, or care.  Therefore, “medical information” as defined under the CMIA is individually identifiable information combined with substantive information regarding a patient’s medical condition or history.  When the computer was stolen from EMC, there was a release of “individually identifiable information,” but not of medical information.
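The court's two-part reading can be sketched as a simple predicate: data is CMIA "medical information" only if it is both individually identifiable and substantively medical. The field names below are illustrative assumptions, not categories drawn from the statute or the opinion.

```python
# Rough sketch of the Eisenhower court's two-part test for "medical
# information" under the CMIA. Field names are hypothetical examples.
SUBSTANTIVE_MEDICAL_FIELDS = {"medical_history", "diagnosis",
                              "condition", "treatment"}
IDENTIFYING_FIELDS = {"name", "address", "email", "phone", "ssn_last4",
                      "medical_record_number", "date_of_birth"}

def is_cmia_medical_information(fields: set[str]) -> bool:
    """True only if the record is identifiable AND substantively medical."""
    identifiable = bool(fields & IDENTIFYING_FIELDS)
    substantive = bool(fields & SUBSTANTIVE_MEDICAL_FIELDS)
    return identifiable and substantive
```

Under this reading, a record like the stolen EMC index (name, medical record number, date of birth, partial Social Security number) is identifiable but not substantively medical, so the predicate returns False; add a diagnosis or treatment field and it returns True.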

In the wake of Eisenhower Medical Center, medical providers should examine what information they store about patients, how that information is protected, and whether information that constitutes “medical information” is segregated from mere individually identifiable information.  The provider here was saved because it kept medical information about its patients only on secure servers.  That information was not transferred to the index on the computer that eventually was stolen.  Medical providers should consider taking similar steps to protect medical information and, in fact, would be safer if they encrypted all data about patients that is transferred to computers, especially data about large groups of patients.  Although the provider prevailed here, no medical provider wants to face a similar challenge.