Massachusetts Senator Elizabeth Warren recently introduced legislation that would ban employers from conducting credit checks on prospective employees during the hiring process. Known as the Equal Employment for All Act, the measure would amend the Fair Credit Reporting Act to prohibit employers from using consumer credit reports to make employment decisions. Notably, the Act would permit exceptions for certain positions, e.g., those requiring national security clearance.

According to Senator Warren,

It was once thought a credit history would provide insight into a person’s character and today, many companies routinely require credit reports from job applicants, but research has shown that an individual’s credit rating has little to no correlation with his or her ability to succeed in the workplace.  A bad credit rating is far more often the result of unexpected medical costs, unemployment, economic downturns, or other bad breaks than it is a reflection on an individual’s character or abilities.  Families have not fully recovered from the 2008 financial crisis, and too many Americans are still searching for jobs. This is about basic fairness — let people compete on the merits, not on whether they already have enough money to pay all their bills.

The legislation is supported by a number of worker advocacy and civil rights groups, although no companion measure has been introduced in the House. The Act would appear to mirror the goals of numerous state laws that already prohibit employers from using credit information in making employment decisions.

A report issued by the Department of Health and Human Services Office of Inspector General (“OIG”) concludes that the Office for Civil Rights (“OCR”) did not meet all of its federal requirements for oversight and enforcement of the HIPAA Security Rule. While the report noted OCR met some of these requirements, it also found that:

  • OCR had not assessed the risks, established priorities, or implemented controls for its HITECH requirement to provide for periodic audits of covered entities to ensure their compliance with Security Rule requirements.
  • OCR’s Security Rule investigation files did not contain required documentation supporting key decisions because its staff did not consistently follow OCR investigation procedures by sufficiently reviewing investigation case documentation.

OIG also found that OCR had not fully complied with Federal cybersecurity requirements for its information systems used to process and store investigation data. The report recommended that OCR:

  • assess the risks, establish priorities, and implement controls for its HITECH auditing requirements;
  • provide for periodic audits in accordance with HITECH to ensure Security Rule compliance at covered entities;
  • implement sufficient controls, including supervisory review and documentation retention, to ensure policies and procedures for Security Rule investigations are followed; and
  • implement the NIST Risk Management Framework for systems used to oversee and enforce the Security Rule.

OCR’s Response. In its response to OIG’s findings, attached as an appendix to the report, OCR generally concurred with OIG’s recommendations and described actions it has taken to address them. OCR’s response to the report provides valuable information to companies as they develop their HIPAA compliance programs, including:

  • From 2008 through 2012, OCR obtained corrective action from covered entities in more than 13,000 cases where it found noncompliance with HIPAA and reached resolution agreements in 11 cases with payments totaling approximately $10 million.
  • The findings from the pilot audits OCR ran in 2012 indicate that covered entities generally have more difficulty complying with the Security Rule than other aspects of HIPAA and that small covered entities struggle with HIPAA compliance in each of the assessment areas – privacy, security and breach notification.
  • Future audits “are less likely to be broad assessments generally across the Rules and more likely to focus on key areas of concern for OCR identified by new initiatives, enforcement concerns, and Departmental priorities.”

OCR’s response also noted that no monies have been appropriated for a permanent audit program. However, covered entities and business associates should not see this lack of funding for a permanent audit program as giving them a pass on HIPAA compliance. The report makes clear that OCR must find a way to meet its audit requirements under HIPAA.

OCR’s recent enforcement activity also demonstrates a commitment to holding companies accountable under HIPAA. In 2013 (through December 20), OCR reached five resolution agreements with payments totaling approximately $3.7 million. These figures from a single calendar year represent nearly half the number of resolution agreements, and more than a third of the total payments, that OCR obtained over the entire five-year period from 2008 through 2012.

In this enforcement environment, it is imperative that covered entities and business associates regularly review their HIPAA compliance program and implement ongoing HIPAA training for their employees.

Fingerprints, voice prints and vein patterns in a person’s palm are three examples of biometrics that may be “moving into the consumer mainstream to unlock laptops and smartphones, or as a supplement to passwords at banks, hospitals and libraries,” reports Anne Eisenberg at the New York Times. Of course, these technologies, aimed at increasing security and, to a lesser degree, convenience, raise data privacy concerns and other risks. However effective, convenient, and efficient these technologies may be, companies need to think carefully through their adoption and implementation, particularly in the workplace.

Below are just a few of the kinds of questions companies should be asking before implementing technologies that involve capturing biometric information.  It is likely that such technologies will go mainstream and, if so, spawn new laws regulating the use of biometric information. Thus, companies using such technologies will need to continue to monitor the legal landscape to manage their risks.

Can we collect this information? In some cases, the answer may be no. For example, in New York, Labor Law Section 201-a prohibits the fingerprinting of employees by private employers, unless required by law. However, according to an opinion letter issued by the State’s Department of Labor on April 22, 2010, a device that measures the geometry of the hand is permissible as long as it does not scan the surface details of the hand and fingers in a manner similar or comparable to a fingerprint. Other states may permit the collection of biometric information provided certain steps are taken. The Illinois Biometric Information Privacy Act, for instance, prohibits private entities from obtaining a person’s or customer’s biometric identifier or biometric information unless the person is informed in writing and consents in writing.

If we can collect it, do we have to safeguard it? Regardless of whether a statute requires a business to safeguard such information, we believe it is good practice to do so. Moreover, states such as Illinois (see above) already require a reasonable standard of care when storing, transmitting or disclosing biometric information.

Is there a notification obligation if unauthorized persons get access to biometric information? In some states the answer is yes. The breach notification statutes in states such as Michigan include biometric data in the definition of personal information. See MCLS § 445.72.

Are there any requirements for disposing of this information? Yes, a number of states (e.g., Colorado and Massachusetts) require that certain entities meet minimum standards for properly disposing records containing biometric information.

Can employees claim this technology amounts to some form of discrimination? In addition to securing devices and accounts, biometric technologies also are being used to track employee time and attendance in order to enhance workforce management. These different applications can form the basis of discrimination claims. For example, earlier in 2013, the U.S. Equal Employment Opportunity Commission (EEOC) claimed that an employer’s use of a biometric hand scanner to track employee time and attendance violated federal law by failing to accommodate certain religious beliefs that opposed the use of such devices.

Retinal scan technology is another biometric technology that can be used for identification/security purposes.  However, as explained in a recent Biometric.com article, “examining the eyes using retinal scanning can aid in diagnosing chronic health conditions such as congestive heart failure and atherosclerosis…[as well as] diseases such as AIDS, syphilis, malaria, chicken pox and Lyme disease [and] hereditary diseases, such as leukemia, lymphoma, and sickle cell anemia.” Thus, the data captured by such scans can inform employers about the health conditions of their employees, raising a range of medical privacy, medical inquiry and discrimination issues under federal and state laws, such as the Americans with Disabilities Act. 

Privacy and data security issues and concerns do not stop at the water’s edge. Companies needing to share personal information, even when the sharing will take place inside the same “company,” frequently run into challenges when that sharing takes place across national borders. In some ways, the obstacles created by the matrix of federal and state data privacy and security laws in the U.S. are dwarfed by the matrix that exists internationally. Most countries regulate to some degree the handling of data, from access, to processing, to disclosure and destruction. And, the law continues to develop rapidly, sometimes due to unexpected events.

Take, for example, the U.S. Safe Harbor program that was designed to facilitate the transfer of personal data of individuals in the European Union (EU) to the United States. Because the EU believes that the law in some countries, including the U.S., fails to provide “adequate safeguards,” the general rule is that personal data of EU persons cannot be sent to the U.S. unless an exception applies. One exception is based on a negotiated deal between the EU and the U.S., commonly known as the U.S. Safe Harbor, a program that currently is in some jeopardy due to recent reports of NSA surveillance stemming from the Snowden disclosures.

Currently, to meet the Safe Harbor, a company must take certain steps, including (i) appointing a privacy ombudsman; (ii) reviewing and auditing data privacy practices; (iii) establishing a data privacy policy that addresses the following principles: notice, choice, onward transfer of data, security, integrity, access and enforcement; (iv) implementing privacy and enforcement procedures; (v) obtaining consents and creating an inventory of consents for certain disclosures; and (vi) self-certifying compliance to the U.S. Department of Commerce.

A recent statement from Viviane Reding, European Commissioner for Justice, Fundamental Rights and Citizenship, quoted in The Guardian, October 17, 2013, signals some changes may be in store for the Safe Harbor:

“The Safe Harbour may not be so safe after all. It could be a loophole because it allows data transfers from EU to US companies, although US data protection standards are lower than our European ones,” said Reding. “Safe Harbour is based on self-regulation and codes of conduct. In the light of the recent revelations, I am not convinced that relying on codes of conduct and self-regulation that are not policed in a strict manner offer the best way of protecting our citizens.”

At the same time, the EU continues to update and strengthen its protections for personal data. Companies that operate globally need to be sensitive not only to the laws governing activities within a jurisdiction, but also to those governing activities between jurisdictions. Common business decisions, such as deciding where data will be stored, setting up global databases for employees’ medical, personnel and other information, or arranging for enterprise-wide employee benefits or monitoring programs, can face significant obstacles relating to the interplay of the data privacy and security laws of the countries involved.

A familiar story: a small health care provider suffers a data breach affecting patient data, reports the incident to the federal Office for Civil Rights (OCR), and winds up subject to an OCR investigation that goes well beyond the breach itself, resulting in a significant settlement payment and corrective action plan.

In this case, a relatively small adult and pediatric dermatology practice in Concord, Massachusetts has agreed to settle potential violations of the Health Insurance Portability and Accountability Act of 1996 (HIPAA) Privacy, Security, and Breach Notification Rules, agreeing to a $150,000 payment and a comprehensive corrective action plan that is subject to OCR review.

So what did OCR allege the provider did wrong that led to this settlement and corrective action plan?

By way of background, on October 7, 2011, the provider reported to HHS a breach of its unsecured electronic protected health information (ePHI) that resulted when an unencrypted thumb drive that stored ePHI concerning surgeries of approximately 2,200 individuals was stolen from an employee’s car. The provider notified its patients within 30 days of the theft and provided media notice. On November 9, 2011, HHS notified the provider that OCR intended to investigate the provider’s compliance with the Privacy, Security, and Breach Notification Rules.

Providers and other covered entities need to realize that if they experience an unexpected theft or other event that results in a reportable breach, it may very well open them up to a compliance review by the OCR.

What potential violations of HIPAA did the OCR allege based on its investigation?

  • The provider did not conduct an accurate risk assessment until October 1, 2012.
  • The provider did not fully comply with the Breach Notification Rule, which includes having written policies and procedures and training workforce members regarding those policies and procedures, until February 7, 2012.
  • The provider failed to reasonably safeguard the thumb drive that wound up being stolen.

Thus, the issue seems to be not so much whether the covered entity appropriately responded to the breach at hand, but whether it was compliant with the Privacy, Security, and Breach Notification Rules prior to the incident and could have avoided the breach. As suggested here, taking compliance steps after the incident will not shield the covered entity from OCR enforcement, although it may have softened the blow in this case.

Lesson for providers and other covered entities: Don’t wait until you lose a thumb drive before getting compliant.

The Federal Financial Institutions Examination Council (FFIEC) recently issued supervisory guidance entitled “Social Media: Consumer Compliance Risk Management Guidance.” Financial institutions are expected to use the Guidance in their efforts to ensure that their policies and procedures provide oversight and controls commensurate with the risks posed by their involvement in social media.

The Guidance was published to address the applicability of federal consumer protection and compliance laws, regulations, and policies to activities conducted via social media by banks, savings associations, and credit unions, as well as by nonbank entities supervised by the Consumer Financial Protection Bureau (CFPB). Notably, the Guidance does not impose any new requirements on financial institutions, but instead is a guide to help financial institutions understand the applicability of existing requirements and supervisory expectations associated with the use of social media.

According to FFIEC, the use of social media by a financial institution to attract and interact with customers can impact the institution’s risk profile. The increased risks can include the risk of harm to consumers, compliance and legal risk, operational risk, and reputation risk. The Guidance is meant to help financial institutions identify potential risk areas to appropriately address, as well as to ensure institutions are aware of their responsibility to oversee and control these risks within their overall risk management program.

The Guidance specifies that a financial institution should have a risk management program that allows it to identify, measure, monitor, and control the risks associated with social media, and that the program should be designed with participation from specialists in compliance, technology, information security, legal, human resources, and marketing. Involving all of these specialists underscores the need for an institution to have a uniform approach to social media, with input from all facets of the institution’s hierarchy. The risk management program should include:

  • A clearly defined governance structure;
  • Policies and procedures for use and monitoring of social media;
  • A risk management process for selecting and managing third-party relationships;
  • An employee training program on social media, including the institution’s policies and procedures for official, work-related use of social media and, potentially, for other uses of social media, including defining impermissible activities;
  • An oversight process for monitoring information posted to proprietary social media sites;
  • Audit and compliance functions; and
  • Parameters for providing appropriate reporting to the institution’s board of directors or senior management.

While the Guidance is intended to help financial institutions understand and successfully manage the risks associated with the use of social media, the Office of the Comptroller of the Currency (OCC), the Board of Governors of the Federal Reserve System (Board), the Federal Deposit Insurance Corporation (FDIC), the National Credit Union Administration (NCUA), and the CFPB will all use it as supervisory guidance for the institutions they supervise, and the State Liaison Committee of the FFIEC has encouraged state regulators to adopt the Guidance.

On December 13, 2013, Fordham Law School’s Center on Law and Information Policy published a study (Study) that paints a sobering picture of how many public schools across the country handle student data, particularly with respect to data they store and services they (and students) use in the “cloud.” There is little doubt that many school districts are strapped for cash and, indeed, utilizing cloud services provides a new opportunity for significant cost savings. However, according to the Study, some basic, low-cost safeguards to protect the data of the children attending these public schools are not in place.

For example, some of the Study’s key findings include:

  • 95% of districts rely on cloud services for a diverse range of functions including data mining related to student performance, support for classroom activities, student guidance, data hosting, as well as special services such as cafeteria payments and transportation planning,
  • only 25% of districts inform parents of their use of cloud services,
  • 20% of districts fail to have policies governing the use of online services, and
  • with respect to contracts negotiated by districts with cloud service providers:
    • they generally do not provide for data security and allow vendors to retain student information in perpetuity,
    • fewer than 25% specify the purpose for disclosures of student information,
    • fewer than 7% restrict the sale or marketing of student information, and
    • many districts have significant gaps in their contract documentation.

A data breach can be significant for any organization, and school districts are not immune. Parents are also beginning to pressure districts for more action, particularly as children can be an attractive target for identity theft.

The Fordham Study provides a number of helpful recommendations for public school districts. Indeed, based on the Study and consistent with basic data privacy and security principles (not to mention FERPA and other laws concerning the safeguarding of student data), there seems to be quite a bit of low-hanging fruit school districts can use to address the risks identified. These include, for example, establishing basic, written privacy policies and procedures that apply to cloud and similar services, implementing more thorough vetting of vendors handling sensitive personal information, and adopting and implementing for consistent use a set of strong privacy and security contract clauses when negotiating with all vendors that will access personal and other confidential information.

Check out our labor colleagues’ recent post (see Labor & Collective Bargaining blog) concerning the permissibility of a policy to prohibit audio/video recording in the workplace under the National Labor Relations Act, and the decision in Whole Foods Market, Inc., Case No. 1-CA-96965 (10/30/13).

Most of us do not go too far – whether at work or at home – without our favorite smartphone, tablet or other mobile device(s) in hand. The audio and video recording capabilities on these devices are standard equipment these days and increasingly sophisticated, and in some cases can be quite surreptitious. For many employers, that functionality makes it more difficult to, among other things: (i) safeguard proprietary and confidential company information, trade secrets, and personal information, (ii) maintain employee, customer and/or patient privacy, (iii) control internal communications, (iv) prevent spoliation of data, and (v) avoid discrimination and harassment activity. So, it is not hard to see why many employers would want to prohibit this activity in the workplace. When doing so, all employers certainly should consider the labor law issues discussed in our colleagues’ post and craft a clear and practical policy.

But what should employers consider when drafting a policy that prohibits certain photography/recording in the workplace? Here are some thoughts:

  • Be clear. The policy should not leave employees to wonder about when recording is prohibited and by whom. For example, taking photos and recordings may be prohibited in certain circumstances, for certain events/information, or by certain company employees, but not at other times, consistent with applicable law.
  • Be technology neutral. Your policy should be written to cover new devices/technologies that enter the market without having to be amended.
  • Keep in mind that not all recording is bad. In many cases, photos and audio or video recordings can benefit the business. For example, video recording could significantly enhance training and documentation capabilities.  
  • Avoid ambiguous or overbroad language. Overbroad language can create legal risks and confusion for employees. For example, prohibiting employees from engaging in “any and all” recording in the workplace would likely be impermissible under the NLRA.
  • Be practical and consistent in implementation and enforcement. In some cases, a policy might not be enough to address the potential risks. So, a company may want to consider not allowing devices to be present when performing certain functions. And, like all policies, disciplining some employees and not others for doing the same thing creates a range of risks.
  • Require consents/releases when needed. When photos or recordings are permitted and made for a commercial purpose, a number of states (e.g., California and New York) have statutory and/or common law protections. In general, a written consent is required. In addition to getting the individuals’ consent, the company also may want to obtain from the person(s) sufficient rights to the images captured (and as may be edited) for the intended uses, as well as a release from claims concerning such uses. 
  • Address how the photos/recordings should be handled. When photos or recordings are needed for business purposes, employees should be advised about appropriate document management practices to ensure the photos/recordings are properly made, filed, saved, safeguarded, and destroyed when no longer needed. For example, photos and recordings could capture information that constitutes protected health information under HIPAA. In that case, employees need to be advised about and trained with respect to the applicable HIPAA policies and procedures.
  • Inform employees of the risks of making and using electronic photos and recordings. For example, when snapping photos or recording, employees may not be thinking about what is in the background visually or what sounds or conversations are being captured. They also may not think about how quickly and broadly these files can be shared if they are not careful.

In a recent consent order, the New Jersey Division of Consumer Affairs settled an investigation involving Dokogeo, Inc., a California-based mobile application developer.

Under the Children’s Online Privacy Protection Act (“COPPA”), websites and online services that collect personal information from children younger than 13 are subject to certain parental notice and consent requirements.

In the Dokogeo investigation, the state alleged that COPPA and the Federal Trade Commission’s COPPA Rule were violated when the personal information of children was collected during the children’s use of a geolocation scavenger hunt application that uses animated cartoon characters. Specifically, the state alleged that by utilizing animation and a child-themed storyline, the app is directed at both children and adults, which would subject it to COPPA. Additionally, the state asserted the app collects personal information as defined under COPPA, including photographs, geolocation information and e-mail addresses. Further, and perhaps most importantly, the state alleged that the company did not obtain verifiable parental consent prior to the collection of personal information from children and that no link to its privacy policy (which would detail the company’s data collection practices) was provided on the home page.

In the consent order, the company denies that the app is directed at children; however, the order requires the company to clearly and conspicuously disclose in its apps and on the home page of its websites the types of personal information it collects, the manner in which it uses the information and whether it shares information with third parties. Additionally, the order requires the company to verify that anyone using any of its apps that collect personal information is older than 13. The order further specifies that if the company fails to comply with the restraints and conditions of the settlement agreement, or violates consumer fraud or child online privacy laws, at any point in the next 10 years, it will be responsible for a $25,000 “suspended penalty.”

This matter, like numerous others throughout the country, highlights the need for companies to review their data collection practices and privacy policies to ensure COPPA compliance.

Following up on my recent post on Google Glass and its impact on the workplace, I had the opportunity to speak with Colin O’Keefe of LXBN on the subject. In the brief video interview I explain the general workplace issues it presents and also touch on the potential data management concerns.