Since mid-2013, the Department of Health and Human Services (“HHS”) has recovered more than $10 million from numerous entities in connection with alleged violations of the Health Insurance Portability and Accountability Act (“HIPAA”).  However, at a recent American Bar Association conference, Jerome B. Meites, a chief regional civil rights counsel at HHS, told attendees that he expects the past 12 months of enforcement to pale in comparison to the next 12.  According to Mr. Meites, HHS’s Office for Civil Rights (“OCR”) wants to send a strong message to the industry through high-impact cases.

In addition to the anticipated increase in fines, Mr. Meites said that OCR still expects to begin a new round of HIPAA audits later this year on some of the 1,200 companies identified earlier this year as potential audit candidates.  These 1,200 companies include approximately 800 covered entities (health care providers, insurers, or clearinghouses) and about 400 business associates.

Mr. Meites also made two particularly pertinent comments concerning HIPAA compliance.  Specifically, he said that portable media devices account for an enormous share of the complaints OCR handles, and that an entity’s failure to perform the comprehensive risk assessment required by HIPAA has been a factor in most of the data breach cases that resulted in financial settlements.

Entities subject to HIPAA’s requirements need to be conscious not only of the planned, more aggressive enforcement against privacy breaches and security lapses, but also of OCR’s extensive audit strategy.  Simply knowing that such plans are in place is not enough, however; entities subject to HIPAA should begin examining their own policies and practices and make changes as needed to address these issues.

Developed by Knightscope, the K5 Autonomous Data Machine is a 5-foot-tall, 300-pound robotic device designed to be “a safety and security tool for corporations, as well as for schools and neighborhoods,” as reported by the New York Times. While K5 may not yet be ready for prime time, its developers are hoping to lure early adopters at “technology companies that employ large security forces to protect their sprawling campuses.” Eventually, K5 could be used to roam city streets, schools, shopping centers, and, yes, workplaces.

According to the Times, K5 will be equipped with a “video camera, thermal imaging sensors, a laser range finder, radar, air quality sensors and a microphone.” The stated mission of the developers is to reduce crime by 50%. They explain that data collected through K5’s sensors is “processed through our predictive analytics engine, combined with existing business, government and crowdsourced social data sets, and subsequently assigned an alert level that determines when the community and the authorities should be notified of a concern.” It is not a stretch to think that the device’s capabilities could be modified to address different applications.

Some are raising concerns that this and similar devices will take jobs away from the private security guard industry. Others believe K5 will only add to “big brother”-type surveillance that continues to erode personal privacy. Just this week, New York City Mayor Bill de Blasio announced a substantial increase in surveillance cameras to be installed in some of the City’s public housing developments. In many settings, concerns about security are winning out over concerns about privacy. Consider the assisted living and nursing home business, where patient and resident abuse is driving a greater need for security at the expense of privacy, despite the added compliance measures required under HIPAA.

K5 raises additional issues for the workplace. Having K5 roam retail space, office space, common areas and so on, even if only intended to address the business’s security concerns, could trigger a number of unintended consequences for employers and their employees. K5 might capture evidence of employee negligence in connection with the treatment of customers or patients. Capable of audio and video recording, K5 could record conversations between employees, between employees and supervisors, between employees and family members, and other communications that raise workplace privacy and other issues. For example, capturing a conversation between an employee and her spouse about care for their child suffering from a disease could raise issues under the Genetic Information Nondiscrimination Act. Of course, recordings like these, which could include communications between employees and customers or patients, could be made without first obtaining the consent of one or all parties to the conversation, in violation of federal and/or state laws. Video of an employee working past his or her scheduled time could become evidence of wage and hour violations. Some might argue that increased workplace surveillance chills protected speech by employees. These are only examples of the potential workplace risks and, of course, there are potential benefits to this kind of technology. K5 may in fact provide greater security to employees and deter prohibited and criminal activities.

Devices like K5 are not inherently good or bad. Rather, the purposes for which they are used and the surrounding circumstances, among other things, will determine the relevant risks and appropriateness. There certainly will be no shortage of devices like K5 in the years to come. The message to businesses, however, is to understand the capabilities of these devices, carefully think through the business and workplace applications and consequences, and hope that the law soon catches up to provide some guidance.


In a victory for California healthcare providers, the California Court of Appeal recently held that a health care provider is not liable under California’s Confidentiality of Medical Information Act (CMIA) (Cal. Civ. Code, § 56 et seq.) when the health care provider releases an individual’s personal identifying information, but the information does not include the person’s medical history, mental or physical condition, or treatment.  The case was a win for the health care provider and, more importantly, provided critical clarity about the definition of “medical information” under the CMIA.

In Eisenhower Medical Center v. Superior Court of Riverside County, plaintiffs sued on behalf of a putative class whose information was disclosed by Eisenhower Medical Center (“EMC”) when a computer containing information about over 500,000 people was stolen from the hospital. The information included each person’s name, medical record number, age, date of birth, and the last four digits of the person’s Social Security number. The information was password protected but was not encrypted.

The CMIA makes it unlawful for a health care provider to disclose or release medical information regarding a patient of the provider without first obtaining authorization.  An individual can recover $1,000 in damages for the improper release of information, and need not show actual damage to recover the $1,000.  

The CMIA defines “medical information” as:

any individually identifiable information, in electronic or physical form, in possession of or derived from a provider of health care, health care service plan, pharmaceutical company, or contractor regarding a patient’s medical history, mental or physical condition, or treatment. ‘Individually identifiable’ means that the medical information includes or contains any element of personal identifying information sufficient to allow identification of the individual, such as the patient’s name, address, electronic mail address, telephone number, or social security number, or other information that, alone or in combination with other publicly available information, reveals the individual’s identity.

In addition, the CMIA permits acute care hospitals to disclose certain patient information upon demand and without authorization from the patient.  Section 56.16 of the CMIA allows hospitals to reveal medical information regarding the general description of the reason for the treatment, the general nature of the injury, and the general condition of the patient, as well as nonmedical information.  The court reasoned that, although section 56.16 applies only when there is a demand for information, it does show that information solely identifying a person as a patient (and nothing more) is not given the same protection as more specific information about the person’s medical history.

EMC argued that the theft of the computer did not result in a disclosure of “medical information,” as defined in the CMIA, for any of the people at issue.  The computer did not contain information about their medical history, condition, or treatment; instead, that information was saved only on EMC’s servers, which are located in its data center. While EMC conceded that the index on the computer contained “individually identifiable information,” EMC maintained that the index did not include information “regarding a patient’s medical history, mental or physical condition, or treatment,” which is required to find a violation of the CMIA.

The court agreed, reasoning that a release of information is prohibited by the CMIA only when it includes information relating to medical history, mental or physical condition, or treatment of the individual.  The court explained that medical information does not include all patient-related information held by a healthcare provider, but must be “individually identifiable information” and also include “a patient’s medical history, mental or physical condition, or treatment.”  This definition of medical information does not encompass demographic or other information that does not reveal a patient’s medical history, diagnosis, or care.  Therefore, “medical information” as defined under the CMIA is individually identifiable information combined with substantive information regarding a patient’s medical condition or history.  When the computer was stolen from EMC, there was a release of “individually identifiable information,” but not of medical information.

In the wake of Eisenhower Medical Center, medical providers should examine what information they store about patients, how that information is protected, and whether information that constitutes “medical information” is segregated from mere individually identifiable information.  The provider here was saved because it kept medical information about its patients only on secure servers.  That information was not transferred to the index on the computer that eventually was stolen.  Medical providers should consider taking similar steps to protect medical information and, in fact, would be safer encrypting all patient data that is transferred to computers, especially data about large groups of patients.  Although the provider prevailed here, no medical provider wants to face a similar challenge.
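
To make the encryption recommendation concrete, the sketch below shows one illustrative way a provider might encrypt a patient-data export before it is copied to a laptop or other portable device. It is not drawn from the Eisenhower opinion or the CMIA; it assumes Python’s third-party cryptography package, and the file names are hypothetical.

  # Illustrative sketch only: encrypt a hypothetical patient-data export
  # before it leaves the server environment. Requires the third-party
  # "cryptography" package (pip install cryptography).
  from cryptography.fernet import Fernet

  # In practice, the key would come from a key-management system and would
  # never be stored alongside the encrypted file or on the portable device.
  key = Fernet.generate_key()
  cipher = Fernet(key)

  # Hypothetical file name standing in for an export like the index at issue.
  with open("patient_index_export.csv", "rb") as f:
      plaintext = f.read()

  # Only the encrypted .enc file should ever be copied to a laptop.
  with open("patient_index_export.csv.enc", "wb") as f:
      f.write(cipher.encrypt(plaintext))

Full-disk encryption on laptops and portable media is another common way to accomplish the same end with less room for user error.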

An Office for Civil Rights (OCR) report issued this month reveals some interesting details about data breach activity under HIPAA, as well as some helpful reminders and recommendations for covered entities and business associates. Section 13402(i) of the HITECH Act requires the Secretary of Health and Human Services to submit a report to various Senate and House Committees containing the number and nature of breaches reported to the Secretary, and the actions taken in response to those breaches. The most recent report covers calendar years 2011 and 2012.

After summarizing the breach notification rules, the report confirms that OCR opens compliance reviews to investigate all reported breaches affecting 500 or more individuals, and it may do so even for reported breaches affecting fewer than 500 individuals. The Department reports that as of the date of the report it has entered into seven resolution agreements/corrective action plans totaling more than $8 million in settlements resulting from breach incidents reported to OCR.

The report provides a detailed analysis of breach activity from 2009 through 2012, including the general causes of the breaches, the types of entities affected, and the location of the protected health information (PHI) when breached. It also provides examples of the kinds of steps taken by covered entities and business associates that experienced data breaches to mitigate the potential consequences of those breaches and prevent future ones:

  • Revising policies and procedures;
  • Improving physical security by installing new security systems or by relocating equipment or records to a more secure area;
  • Training or retraining workforce members who handle PHI;
  • Providing free credit monitoring to customers;
  • Adopting encryption technologies;
  • Imposing sanctions on workforce members who violated policies and procedures;
  • Changing passwords;
  • Performing a new risk assessment; and
  • Revising business associate agreements.

What is perhaps most helpful in this report is the “Lessons Learned” section that describes areas to which covered entities and business associates should pay particular attention in their compliance efforts to help avoid common types of breaches. We’ve summarized these below:

  • Risk Assessment. Perform and document a thorough risk assessment and address vulnerabilities identified. Pay particular attention to mobile devices – digital copiers, USB drives, laptop computers, mobile phones – and ePHI transmitted across networks.
  • Evaluate Changes In Operations, Office Moves/Renovations and Mergers/Acquisitions. The risk assessment process is not a one-time activity. As the business changes, moves and expands, covered entities and business associates need to evaluate how these changes affect their data privacy and security program.
  • Portable Electronic Devices. The risks here are obvious, and significant attention needs to be given to the kinds of safeguards that are appropriate, including encryption.
  • Proper Disposal. Have a plan for disposing of PHI that is no longer needed, including PHI on electronic devices and equipment that store it, as well as PHI maintained by vendors.
  • Physical Access Controls. Focusing on IT and PHI in electronic format should not be at the exclusion of traditional physical safeguards, such as controls on access to facilities and workstations that maintain PHI, which benefit PHI in all forms.
  • Training. This is critical to making sure that employees and other workforce members not only understand the applicable safeguards, but also to create a sense of awareness and a culture of privacy and security within the organization.

Recently, the Federal Trade Commission (“FTC”) filed a limited objection in bankruptcy court to the proposed sale of the assets of ConnectEdu, Inc. (“ConnectEdu”), on the grounds that the sale might not comply with the company’s privacy policy protecting customer personal information.

Specifically, ConnectEdu, an education technology company that provided interactive tools to assist students, parents and school counselors in career planning, sought to sell substantially all of its assets in a Chapter 11 bankruptcy proceeding in the United States Bankruptcy Court for the Southern District of New York.  (In re ConnectEdu, Inc., et al., Case No. 14-11238 (Bankr. S.D.N.Y.).)  According to the FTC, ConnectEdu collected a substantial amount of personal information from high school and college student customers, including names, dates of birth, addresses, email addresses, telephone numbers and additional information.

ConnectEdu’s privacy policy provided that this personal information would generally not be distributed to third parties except with the express written consent and direction of its customers.  Interestingly, ConnectEdu’s privacy policy also expressly provided:

In the event of sale or intended sale of the Company, ConnectEdu will give users reasonable notice and an opportunity to remove personally identifiable data from the service.

The FTC filed an objection to the asset sale on the grounds that ConnectEdu had potentially not complied with Section 363(b)(1)(A) of the Bankruptcy Code, which prohibits a debtor from selling personally identifiable information about an individual that is subject to a privacy policy unless the sale is consistent with that policy.  The FTC took the position that ConnectEdu’s customers may not have been provided notice and an opportunity to remove their personal information upon the potential sale of assets.  The FTC alleged that failure to comply with ConnectEdu’s privacy policy in connection with the sale of assets could violate the Bankruptcy Code as well as the FTC Act’s prohibition against “deceptive acts or practices in or affecting commerce” (15 U.S.C. § 45(a)), exposing both the buyer and the seller of the assets to liability.

This is not the first time the FTC has addressed statements in a website privacy policy in the bankruptcy context, even where a privacy ombudsman had been appointed under Bankruptcy Code section 332. Note also that the California Online Privacy Protection Act regulates website content by requiring operators of commercial websites to conspicuously post a privacy policy if they collect personally identifiable information from Californians. On May 21, 2014, California Attorney General Kamala D. Harris provided guidance on complying with the Act and its 2013 amendments, which require privacy policies to include information on how the operator responds to Do Not Track signals or similar mechanisms.

These cases and the recent activity in California highlight the need for all companies to review their website privacy policies generally, and specifically as they relate to how personal information collected on the sites may be used and disclosed, including in connection with the potential sale of the business.  Absent such review, uses and disclosures that may be considered usual and customary, and otherwise permissible, could subject a company to significant legal liability and monetary exposure stemming from the terms of the company’s own website policy.

Add Oklahoma to the list of states prohibiting employers from requesting or demanding access to the personal social media accounts of employees or applicants. Signed into law by Gov. Mary Fallin, H.B. 2372 becomes effective November 1, 2014.

In addition to prohibiting employers from requesting or demanding employees’ or applicants’ usernames or passwords for their personal social media accounts, the new law makes clear that Oklahoma employers cannot demand that employees or applicants access those accounts in the employer’s presence, allowing the employer to see the accounts’ contents. As in other states with similar laws, employees and applicants who refuse to provide access to their personal social media accounts generally cannot be fired, disciplined, denied employment, or otherwise penalized.

Employers may, however, request or demand access information for information systems or electronic communications devices owned or subsidized by the employer, as well as for any accounts or services provided by the employer “or that the employee uses for business purposes.” It will be interesting to see whether this language will be interpreted to apply to accounts such as LinkedIn, which employees might use for business purposes – e.g., connecting with customers or clients of the employer. The Act also does not prohibit employers from engaging in certain investigations, such as where the employer has specific information about activity on the employee’s personal social media account and the investigation is for the purpose of ensuring compliance with applicable laws, regulatory requirements, or prohibitions against work-related employee misconduct.

The law also protects Oklahoma employers that inadvertently acquire the access information for an employee’s personal social media account, so long as the employer does not use that information to access the account. However, the law states:

Neither this section nor any other Oklahoma law shall prohibit an employer from reviewing or accessing personal online social media accounts that an employee may choose to use while utilizing an employer’s computer system, information technology network or an employer’s electronic communication device.

So, while employers cannot ask employees for their usernames or passwords to personal social media accounts, it appears employers can monitor the activities and communications of employees in their personal social media accounts when the employees access those accounts through employer-provided information systems, networks or devices. Employers should exercise caution here, as federal laws such as the Stored Communications Act, as well as the laws of other states, may be triggered.

Employers may have legitimate needs to access employee or applicant personal social media or other online accounts – such as in cases involving theft of trade secrets, disclosures of confidential information and other reasons. However, as these state laws develop, employers will need to be careful in determining which law applies and what the applicable law permits them to do, particularly for larger multi-state employers.


Baltimore, MD has joined the growing list of cities and states around the country implementing “ban the box” legislation.  “Ban the box” legislation restricts inquiries regarding an applicant’s criminal history on applications for employment and during job interviews.  The EEOC recommends “banning the box,” believing that the use of conviction records to exclude applicants can have a disparate impact on minorities.

The Baltimore Ordinance prohibits employers who employ 10 or more full-time employees in the city of Baltimore from doing any of the following at any time before a conditional offer of employment has been made: requiring an applicant for employment to disclose or reveal whether he or she has a criminal record or otherwise has had a criminal accusation brought against him or her; conducting a criminal-record check on the applicant; or otherwise making an inquiry of the applicant or others about whether the applicant has a criminal record or otherwise has had criminal accusations brought against him or her.

While many “ban the box” laws only apply to public employers, more and more jurisdictions have passed these laws applying to private employers.  For example, the states of Hawaii, Massachusetts, Minnesota, and Rhode Island have such laws, while the cities of Buffalo, NY, Newark, NJ, Philadelphia, PA, and San Francisco, CA have also enacted “ban the box” laws.  While these jurisdictions currently have laws on the books, similar legislation is pending in numerous states and cities throughout the country.

With the Ordinance taking effect August 13, 2014, employers with operations in Baltimore, MD need to review their hiring processes to determine what changes, if any, will need to be made to comply.

As we previously reported, the Florida legislature was considering joining the numerous other states that have banned employers from requesting or requiring access to current or prospective employees’ social media accounts.

Senate Bill 198 (SB 198), entitled “An Act Relating to Social Media Privacy,” has died in committee.  As such, Florida will not be joining the states that have already enacted similar laws: Arkansas, California, Colorado, Illinois, Maryland, Michigan, Nevada, New Jersey, New Mexico, Oregon, Utah, Vermont, and Washington.

At this stage, it is unclear whether a new bill will be proposed, but it appears that the efforts to prohibit such employer activity in 2014 have failed.

Effective January 1, 2015, Tennessee employers, including government entities, will be prohibited from requesting or requiring access to the private social networking or online accounts of employees and job applicants under the Volunteer State’s “Employee Online Privacy Act of 2014,” signed by Governor Bill Haslam. Our Tennessee colleagues outline the key provisions of the law, including some of its important exceptions.

The exceptions will be helpful for employers. For example, some of them permit employers to:

  • request or require a username or password to access an electronic communications device supplied by or paid for wholly or in part by the employer, as well as to access an account or service provided by the employer and obtained by virtue of the employment relationship or used for the employer’s business purposes;
  • monitor, review, access, or block electronic data stored on an electronic communications device supplied by or paid for wholly or in part by the employer, or stored on an employer’s network, in accordance with state and federal law; and
  • view, access, or use information about an employee or applicant that can be obtained without violating the prohibited conduct, or information that is available in the public domain.

There are other exceptions in the Tennessee law, but not all of the same exceptions exist in the laws enacted in other states across the country, such as Arkansas, California, Colorado, Illinois, Maryland, Michigan, Nevada, New Jersey, New Mexico, Oregon, Utah, Washington, and, more recently, Wisconsin. Employers will need to be careful in navigating these laws nationwide. At the same time, as more employers explore BYOD programs and monitoring technologies, including spyware and keylogging, they will need to consider the risks those practices and technologies may present under statutes like the one in Tennessee.