Banks Cannot Skirt Contract Remedies in Data Breach Suit Against Retail Merchant

Attempting to advance a novel theory of law, several banks filed a class action in Illinois federal court against a grocery store chain arising out of a data breach that resulted in the theft of 2.4 million credit and debit cards. Community Bank of Trenton v. Schnuck Markets, Inc. After the breach, and based on the terms of their credit card user agreements, the banks were required to issue new cards and, as required by federal law, reimburse their customers for financial losses due to unauthorized purchases. In the suit, the financial institutions sought to recover some of their costs from the grocery store chain that was allegedly responsible for the loss of the data. The plaintiffs estimated the losses to be in the tens of millions of dollars. As discussed below, the banks were not successful.

The core question in the case was whether any applicable law provided the cardholders’ banks with a remedy under tort law against a retail merchant that suffered a data breach.

Generally speaking, the credit card issuing bank, here the Community Bank of Trenton, has contractual relationships with the consumers to whom the cards are issued and with the credit card network, e.g., Visa or Mastercard. The issuing bank does not have a direct relationship with the retail merchant, here Schnuck Markets. From the perspective of a bank such as Community Bank of Trenton, its remedies arise from (a) the contract between it and the consumer, (b) the contract between it and the credit card network, or (c) the operation of federal law, which provides limited reimbursement.

Seeking an end run around these relationships, the class of banks invoked common law tort theories to proceed directly against the retail merchant, because no contractual remedy would make them whole for their losses.

The banks claimed in part that the merchant was negligent not in permitting the breach to occur, but in failing to recognize for months thereafter that it had occurred. And once the chain did learn of the breach, another two weeks passed before it was announced publicly. The plaintiffs alleged that numerous security steps could have prevented the breach and that those steps were required by the credit card network rules (e.g., installing antivirus software, maintaining firewalls, encrypting sensitive data, and implementing two-factor authentication).

Despite these seemingly compelling arguments, the Seventh Circuit ultimately upheld the lower court’s dismissal of the banks’ claims, finding that the banks were bound by the contractual provisions of their agreements. Essentially, the court ruled that by joining the credit card system, the banks accepted some risk of not being fully reimbursed for the costs of another party’s mistakes.

With the increasing number of data breaches occurring in every sector of the economy, we can anticipate more and more litigation, including attempts to assert novel theories to recover the significant losses resulting from those breaches.

The U.S. Supreme Court Dismisses U.S. v. Microsoft Following Passage of the CLOUD Act

On April 17th, the U.S. Supreme Court dismissed the highly anticipated U.S. v. Microsoft, ruling that recently enacted legislation rendered the case moot. Microsoft Corp. had been in litigation with the U.S. Department of Justice (DOJ) for several years over the issue of whether Microsoft must comply with a U.S. search warrant for access to customers’ emails and other personal data within its “possession, custody or control,” regardless of whether such data is stored within the U.S. or abroad. The Supreme Court’s ruling had been anticipated since March, when President Trump signed into law the Clarifying Lawful Overseas Use of Data Act (CLOUD Act), H.R. 4943, which amends a provision of the Electronic Communications Privacy Act of 1986 (ECPA), clarifying the federal government’s authority to access U.S. individuals’ data and communications stored abroad.

The dispute between Microsoft and the DOJ arose in 2013, when prosecutors served Microsoft with a warrant issued under the Stored Communications Act of 1986 (SCA), a provision of the ECPA, demanding that the company turn over personal emails and data of a user account associated with a criminal drug trafficking investigation. Microsoft complied with the warrant to the extent that the data was stored on servers in the U.S. However, a portion of the requested data was stored on a server in Ireland, and Microsoft refused to turn it over.

The Supreme Court agreed to hear the dispute in October 2017, after the U.S. Court of Appeals for the 2nd Circuit, in July 2016, held in favor of Microsoft and quashed the warrant, a ruling the DOJ appealed. In oral arguments before the Supreme Court in February, the DOJ and federal law enforcement argued that technology companies disrupt criminal investigations when they refuse to turn over cloud data stored on servers abroad. It should not matter where data is stored, the DOJ argued, if it can be accessed “domestically with a click of a computer mouse.” Conversely, Microsoft argued that the SCA, the basis for the DOJ’s warrant, was not equipped to address new technologies and usage.

The CLOUD Act, enacted on March 22nd, clarifies the federal government’s authority to compel production of data stored abroad and creates new procedures for issuing such warrants. The new legislation also affords a company the opportunity to move to quash a warrant on the basis that there is a “material risk” that the demand would violate foreign law.

Following passage of the CLOUD Act, the DOJ filed a motion to dismiss the case on grounds that the new legislation rendered the dispute moot, and stated that it would withdraw the original warrant and reissue a new one under the procedural requirements of the CLOUD Act, to which Microsoft, in a subsequent motion, agreed. “There is no reason for this court to resolve a legal issue that is now of only historical interest,” Microsoft stated in its motion.

The CLOUD Act has been broadly supported by both law enforcement and the technology sector, both in agreement that the 30-year-old SCA was in need of significant updates. Full implications of the new legislation will take time to become evident.

Massachusetts Enacts Law Providing Greater Privacy of Health Insurance Information

Health insurance carriers often provide explanation of benefits (EOB) summaries to the policyholder specifying the type and cost of health care services received by dependents covered by the policy. EOBs often disclose sensitive information regarding the mental or physical health condition of adult dependents. Massachusetts has now enacted a law, An Act to Protect Access to Confidential Healthcare (the PATCH Act), that permits patients to require their insurance carriers to send their medical information only to them, as opposed to the policyholder. This will permit a spouse or adult child of the policyholder to keep medical information from being shared with the policyholder. The law also requires insurance carriers to use a common summary of payments form, to be developed by the Massachusetts Division of Insurance. The law takes effect April 1, 2019; however, any carrier that has the capacity to provide electronic access to common summary of payments forms prior to that date must do so.

This new Massachusetts law affords individuals greater privacy protections than HIPAA with respect to health information communicated by insurance carriers. For example, HIPAA provides for a right to request restriction (45 CFR § 164.522). Under this HIPAA provision, an individual has the right to request restrictions on how his or her protected health information for treatment, payment, or health care operations is used or disclosed. However, under HIPAA, health care insurance carriers do not have to agree to the individual’s request. Conversely, the new Massachusetts law provides that carriers “shall not specify or describe sensitive health care services in a common summary of payments form.” The Division of Insurance will define “sensitive health care services.” In determining that definition, the law requires the Division of Insurance to “consider the recommendations of the National Committee on Vital and Health Statistics and similar regulations in other states and shall consult with experts in fields including, but not be limited to, infectious disease, reproductive and sexual health, domestic violence and sexual assault and mental health and substance use disorders.” In addition, if an insured member who is legally authorized to consent to his or her care or the care of others has no liability for payment for a procedure or service, that member may request that the carrier not issue a common summary of payments form for a specific service or procedure. The carrier may request written verification of an oral request, but may not require an explanation of the basis for the request unless otherwise required by law or a court order.

Insurance carriers will be required to communicate, in plain language and in a clear and conspicuous manner, the members’ rights to request that medical information be sent to them rather than to the policyholder and to suppress the common summary of payments form. These rights must be described in evidence of coverage documents, in member privacy communications, and on every common summary of payments form. This information also must be conspicuously displayed on the carrier’s member website and online portals for individual members.

The law also requires the Division of Insurance to issue guidance as necessary to implement and enforce the law by July 1, 2019 and to develop and implement a plan to educate providers and consumers regarding the rights of insured members and the responsibilities of carriers to promote compliance with the law by October 1, 2019. Nothing in the new law supersedes any general or special law related to informed consent of minors.

Insurance carriers should consider an immediate review of their systems to determine the best way to implement the requirements of this new Massachusetts law.

Oregon Enacts Tougher Data Breach Notification Law

Oregon Governor Kate Brown signed a bill last month toughening the state’s already stringent data breach notification law, which will take effect on June 2, 2018.  The most significant change for companies to be aware of is the requirement that affected consumers be notified no later than 45 days following discovery of a breach.  Additionally, if a company offers free credit monitoring or identity theft protection services to the affected consumers, the company may not require the consumers to provide a credit or debit card number in order to receive such services.

Originally passed in 2007, and amended in 2015, the Oregon Consumer Identity Theft Protection Act (codified at ORS §§ 646A.600 to 646A.628) already requires companies to notify affected consumers “in the most expeditious manner possible, without unreasonable delay.”  Further, if the number of affected consumers is greater than 250, the company must notify the Attorney General, and the breach will be published on the Oregon Department of Justice website.

Other key changes in the 2018 amendment to the Oregon Consumer Identity Theft Protection Act include:

  • The law now applies to any person or organization that “owns, licenses, or otherwise possesses personal information” (where previously it applied only to those that “own or license personal information”).
  • The duty to report is now triggered if a company receives notice of a breach from a third-party contractor that maintains such information on behalf of the company.
  • The definition of “personal information” under the law is expanded to include any “information or combination of information that a person reasonably knows or should know would permit access to the consumer’s financial account.”

Additionally, when the 2018 bill takes effect in June, Oregon will join a growing number of states that have prohibited credit reporting agencies from charging a fee to consumers for placing, temporarily lifting, or removing a security freeze on their credit reports—regardless of whether the consumer was a victim of identity theft.

Finally, the bill also amends ORS § 646A.622, which contains the Act’s information security and safeguard requirements.  The requirements now apply to any person or organization that “has control over or access to” personal information, in addition to those that “own, maintain, or otherwise possess” such information.  The language in subsection (2)(d)—listing the administrative, technical, and physical safeguards that should comprise an organization’s information security program—was also thoroughly revised.  Notable changes include:

  • Administrative safeguards, including identification of potential risks and training of key employees, must be performed with “reasonable regularity.”
  • Technical safeguards must now include assessment of “vulnerabilities” in addition to “risks,” and security updates or patches must be implemented when risks or vulnerabilities are identified.
  • Physical safeguards must be assessed “in light of current technology,” and intrusions must be “monitored” and “isolated” in addition to the previous requirement that they be “detected,” “prevented,” and “responded to.”

As more and more states are amending their data breach notification laws (or even enacting such laws for the first time), organizations of all sizes are encouraged to regularly review and amend their data safeguarding programs (including training programs and incident response processes) to ensure compliance with the various state laws.

New FTC Report Makes Security Recommendations to the Mobile Device Industry

Securing data held by mobile devices is largely reliant upon technology, and a recent report by the Federal Trade Commission (“FTC”) takes aim at how that technology can be both improved and better utilized. The report, published in February 2018 and titled, Mobile Security Updates: Understanding the Issues, presents findings based upon information requested by the FTC in 2016 of eight mobile device manufacturers: Apple, Inc., Blackberry Corp., Google, Inc., HTC America, Inc., LG Electronics USA, Inc., Microsoft Corp., Motorola Mobility, LLC, and Samsung Electronics America, Inc.

Generally speaking, the FTC in the report recommended that both the devices themselves and their corresponding support services need to do a better job of addressing consumers’ security concerns. Security updates need to be deployed more quickly and more frequently, but consumers also need to know when they are, and when they are not, covered by services providing these updates. The report further recommends that manufacturers provide a minimum period during which security updates are to be provided, and make that period known to the consumer prior to purchase. The report found that some manufacturers do in fact provide substantial security support, but little to no information is provided on the topic prior to purchase. It was also recommended that manufacturers consider providing security updates that are separate and distinct from other updates, which are often bundled together in one package.

Providing security support services by way of software updates is only valuable, however, so long as consumers take advantage of them. To this point, the report recommended that government, industry, and advocacy groups work together to educate consumers about the importance of installing security updates as they become available. It was further recommended that manufacturers improve record keeping as it pertains to update decisions, support length, update frequency, and the rate at which consumers actually download and install the updates, all with the goal of improving upon past practices.

Takeaway for Small Businesses

The FTC’s mobile security report is intended to bolster consumer protection; however, it is also relevant for small businesses and their use of mobile devices in the workplace. Many small businesses do not have the resources to implement their own mobile security measures, and thus rely heavily on the mobile device manufacturers to ensure a certain level of security. Moreover, small businesses often adopt a bring-your-own-device (BYOD) policy, which permits employees to bring and use personally owned devices in the workplace. While a BYOD policy helps a small business save on device and carrier costs, it also increases the likelihood of security threats to the business.

Although small businesses should not rely entirely on the security measures provided by mobile device manufacturers, improved security updates and support services as recommended by the FTC’s report will certainly benefit small businesses that do not have resources to invest in security measures of their own. That said, just as the FTC advises consumers to take advantage of security software updates, it is imperative that small businesses, particularly those with a BYOD policy, act prudently with respect to the mobile device security measures made available to them by the manufacturers. For more information on BYOD key issues and policy considerations, visit Jackson Lewis’s “Bring Your Own Device” BYOD Issues Outline. Mobile device manufacturers are in a constant race to stay ahead of those seeking to exploit vulnerabilities. Issuing frequent updates is crucial for security, but it is just as important that consumers and businesses that rely heavily on manufacturers’ security measures understand their role in the process.

“Your Own Cybersecurity Is Not Enough”: NJ Physician Practice Fined Over $400,000 for Data Breach Caused By Vendor

Last week, New Jersey Attorney General Gurbir S. Grewal and the New Jersey Division of Consumer Affairs (“Division”) announced that a physician group affiliated with more than 50 South Jersey medical and surgical practices agreed to pay $417,816 and improve data security practices to settle allegations it failed to properly protect the privacy of more than 1,650 patients whose medical records were made viewable on the internet as a result of a server misconfiguration by a private vendor.

Sharon M. Joyce, Acting Director of the Division, warns HIPAA covered entities:

[Y]our own cybersecurity is not enough.  You must fully vet your vendors for their security as well.

One of the significant changes made by the Health Information Technology for Economic and Clinical Health (HITECH) Act is that state Attorneys General were given authority to enforce the privacy and security regulations under the Health Insurance Portability and Accountability Act (HIPAA). Accordingly, covered entities and business associates should remember that the federal Office for Civil Rights is not the only game in town when it comes to investigating data breaches and imposing fines when HIPAA violations are found. New Jersey is not the only state that has used this authority.

In this case, according to the NJ Office of Attorney General, the physician practice used a third party vendor to transcribe dictations of medical notes, letters, and reports by doctors, a popular service provided to many physician practices and other medical providers across the country. When the vendor, a HIPAA business associate, attempted to update software on a password-protected File Transfer Protocol website (“FTP Site”) where the transcribed documents were kept, it unintentionally misconfigured the web server, allowing the FTP Site to be accessed without a password. As a result, anyone who searched Google using search terms that happened to be contained within the dictation information would have been able to access and download the documents located on the FTP Site. These documents would have included doctor names, patient names, and treatment information concerning patients.
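The misconfiguration described above, a password-protected site quietly becoming publicly readable, is the kind of regression that routine automated checks can catch. Below is a minimal, hypothetical sketch (the classification logic and function names are illustrative assumptions, not anything from the settlement) that flags a path as exposed when an unauthenticated request is answered with content rather than an authentication challenge:

```python
# Hypothetical post-deployment check: probe paths that SHOULD require
# credentials with an unauthenticated request and classify the response.
# 401/407 -> server demanded credentials (protected)
# 403     -> server refused outright (protected)
# 2xx/3xx -> content served without a password (exposed / misconfigured)

def classify_unauthenticated_response(status_code: int) -> str:
    """Return 'protected', 'exposed', or 'unknown' for an HTTP status code."""
    if status_code in (401, 403, 407):
        return "protected"
    if 200 <= status_code < 400:
        return "exposed"
    return "unknown"  # 404, 5xx, etc. warrant human review


def audit(paths_to_status: dict) -> list:
    """Given {path: status_code} observed without credentials,
    return the paths that served content anyway."""
    return [path for path, code in paths_to_status.items()
            if classify_unauthenticated_response(code) == "exposed"]
```

Run after every software update or configuration change; any “exposed” result on a protected path is exactly the condition that went unnoticed here, and should trigger an incident review.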

Following notification of the breach, the Division investigated and found HIPAA violations beyond the vendor’s security incident. The Division identified violations of HIPAA’s privacy and security regulations by the physician practice, including:

  • Failing to have a security awareness and training program for its workforce members, including management.
  • Failing to respond to and mitigate the incident in a timely manner.
  • Failing to create and maintain retrievable exact copies of ePHI maintained on the FTP site.
  • Failing to maintain a written or electronic log of the number of times the FTP Site was accessed.

There are at least three important lessons from this case for physician practices in New Jersey and in other states:

  1. The New Jersey Office of Attorney General and the Division of Consumer Affairs, and Attorneys General in other states, are ready, willing and able to enforce the HIPAA privacy and security regulations.
  2. While investigating data breaches, federal and state officials are concerned about more than the breaches themselves. They will investigate the state of the covered entity’s privacy and security compliance prior to the breach. Accordingly, covered entities should not wait to experience a data breach before tightening up their privacy and security compliance programs.
  3. HIPAA covered entities need to identify their business associates and take steps to be sure they are complying with the HIPAA security regulations. Business associates can be the weakest link in a covered entity’s compliance efforts.

Alabama Becomes the Final State to Enact a Data Breach Notification Law

On March 28th, Alabama Governor Kay Ivey (R) signed into law the Alabama Data Breach Notification Act, Act No. 2018-396, making Alabama the final state to enact a data breach notification law. South Dakota Governor Dennis Daugaard had signed a similar statute into law one week prior. The Alabama law will take effect May 1, 2018. Being the last state to enact a breach notification law, Alabama had the benefit of examining the approaches of nearly all of the other states, and it apparently drew provisions from many other state laws, including relatively detailed requirements for covered entities (as defined within the statute) and their third-party service providers to maintain reasonable security measures to protect “sensitive personally identifying information.”

Breach Notification Requirements

The Alabama Data Breach Notification Act requires a covered entity to notify any Alabama resident whose sensitive personally identifying information was acquired, or is reasonably believed by the covered entity to have been acquired, by an unauthorized person as a result of a data breach that is reasonably likely to cause substantial harm to the individual to whom the information relates.

Similar to South Dakota and recent amendments to other state data breach notification laws, the Alabama law includes an expansive definition of personal information. Notably, however, “biometric information” is not included in Alabama’s definition of personal information, as has been a typical inclusion for other states of late.

Personal information or “sensitive personally identifying information” as it is called by the Alabama law, is defined as an Alabama resident’s first name or first initial and last name in combination with one or more of the following with respect to the same Alabama resident:

  • A non-truncated social security number or tax identification number;
  • A non-truncated driver’s license number, state-issued identification card number, passport number, military identification number, or other unique identification number issued on a government document used to verify the identity of a specific individual;
  • A financial account number, including a bank account number, credit card number, or debit card number, in combination with any security code, access code, password, expiration date, or PIN, that is necessary to access the financial account or to conduct a transaction that will credit or debit the financial account;
  • Any information regarding an individual’s medical history, mental or physical condition, or medical treatment diagnosis by a health care professional;
  • An individual’s health insurance policy number or subscriber identification number and any unique identifier used by a health insurer to identify the individual;
  • A user name or email address, in combination with a password or security question and answer that would permit access to an online account affiliated with the covered entity that is reasonably likely to contain or is used to obtain sensitive personally identifying information.

The law requires a covered entity that experiences a data breach to notify affected Alabama residents “as expeditiously as possible and without unreasonable delay,” taking into account a reasonable time to conduct an appropriate investigation, but not later than 45 days from the determination that a breach has occurred and is reasonably likely to cause substantial harm, with certain exceptions. Notably, if a covered entity’s third party agent experiences a breach of security in the agent’s system, the agent shall notify the covered entity as expeditiously as possible and without unreasonable delay, but no later than 10 days following the determination of the breach or reason to believe the breach occurred. Covered entities should be reviewing their services agreements with third party vendors to ensure they are consistent with these requirements.
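The paragraph above describes two clocks: 45 days for the covered entity’s notice to residents, and 10 days for a third party agent’s notice to the covered entity. A minimal sketch of tracking both deadlines (the calendar-day counting is an illustrative assumption; the statute’s text and its exceptions govern):

```python
from datetime import date, timedelta

# Outer statutory limits under the Alabama Data Breach Notification Act
COVERED_ENTITY_DAYS = 45     # notice to affected Alabama residents
THIRD_PARTY_AGENT_DAYS = 10  # agent's notice to the covered entity


def notification_deadline(determination: date,
                          is_third_party_agent: bool = False) -> date:
    """Latest permissible notice date, counted in calendar days from the
    determination that a breach occurred (an assumption for illustration;
    the statute's exceptions, e.g. law enforcement delays, are not modeled)."""
    days = THIRD_PARTY_AGENT_DAYS if is_third_party_agent else COVERED_ENTITY_DAYS
    return determination + timedelta(days=days)
```

For example, a covered entity that determines on May 1 that a qualifying breach occurred would face an outer deadline of June 15, while its third party agent making the same determination would owe notice to the covered entity by May 11.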

In addition, if more than 1,000 state residents are impacted by the breach, the state attorney general and consumer reporting agencies must be notified. Following a number of other states, the Alabama law also sets forth specific content requirements for the notices to individuals and the Attorney General. For example, if notification to the Attorney General is required, it must include (i) a summary of events surrounding the breach, (ii) the approximate number of individuals in Alabama affected by the breach, (iii) information about any services, such as ID theft prevention or monitoring services, being offered or scheduled to be offered, without charge, to individuals and instructions on how to use the services, and (iv) contact information for the covered entity or its agent.
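The 1,000-resident trigger and the four required content elements lend themselves to a simple pre-filing checklist. A hypothetical sketch (field names are illustrative, not statutory terms):

```python
from dataclasses import dataclass

AG_NOTICE_THRESHOLD = 1000  # AG notice required if MORE residents are affected


def ag_notice_required(affected_residents: int) -> bool:
    """True if the number of impacted Alabama residents exceeds 1,000."""
    return affected_residents > AG_NOTICE_THRESHOLD


@dataclass
class AGBreachNotice:
    """The four content elements of an Attorney General notice."""
    event_summary: str        # (i) summary of events surrounding the breach
    affected_residents: int   # (ii) approximate number of Alabama individuals
    services_offered: str     # (iii) e.g., free monitoring plus instructions
    contact_information: str  # (iv) covered entity or agent contact

    def missing_elements(self) -> list:
        """Names of required text elements left blank, for a pre-filing check."""
        return [name for name, value in (
            ("event_summary", self.event_summary),
            ("services_offered", self.services_offered),
            ("contact_information", self.contact_information),
        ) if not value]
```

A notice drafted with a blank element would surface in `missing_elements()` before it is sent, mirroring the statute’s list rather than interpreting it.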

Reasonable Safeguard Requirements

The Alabama law also imposes a reasonable security requirement on covered entities and their third party vendors. Under the law, covered entities and third parties are required to implement and maintain reasonable security measures to protect sensitive personally identifying information (see definition above) against a breach of security. This provision is significant not only because it reaches third party agents as well as covered entities, but also because of the scope of the information to which it applies. For example, the similar requirement under the often-cited Massachusetts regulations currently does not apply to medical information; the Alabama reasonable safeguard requirement appears to reach this category of personal information.

Security measures include:

  • Designation of an employee(s) to coordinate the reasonable security measures;
  • Identification of internal and external risks of a breach of security;
  • Adoption of appropriate information safeguards to address identified risks of a breach of security and assess the effectiveness of such safeguards;
  • Retention of service providers, if any, that are contractually required to maintain appropriate safeguards;
  • Keeping management of a covered entity, including its board of directors, appropriately informed of the overall status of its security measures.

Notably, the law also requires a covered entity to conduct an assessment of its security based upon its security measures as a whole, placing an emphasis on data security failures that are multiple or systemic, and considering all of the following:

  • The size of the covered entity.
  • The amount of sensitive personally identifying information and the type of activities for which the sensitive personally identifying information is accessed, acquired, maintained, stored, utilized, or communicated by, or on behalf of, the covered entity.
  • The covered entity’s cost to implement and maintain the security measures to protect against a breach of security relative to its resources.

Enforcement

A violation of the Alabama Data Breach Notification Act is also considered a violation of the Alabama Deceptive Trade Practices Act; however, criminal penalties are not available. The Office of the Attorney General maintains the exclusive authority to bring an action for civil penalties; there is no private right of action. Failure to comply with the Alabama law could result in fines of up to $5,000 per day, with a cap of $500,000 per breach. Of note, such penalties are reserved for failure to comply with the law’s notification requirements, and it is not clear to what extent penalties would apply for failure to comply with the law’s reasonable security requirements.

As each state now has a data breach notification law, and many states continue to amend those laws, it is imperative for companies that operate in multiple states and/or maintain personal information about residents of multiple states to be aware of the requirements across jurisdictions. Companies should regularly review and update the measures they take to secure the data they hold and to respond appropriately to any potential data incident.

South Dakota: The 49th State to Enact a Data Breach Notification Law

It’s official! Alabama is the only remaining state lacking a data breach notification statute. On March 21, 2018, South Dakota Attorney General Marty Jackley announced that Governor Dennis Daugaard had signed into law the state’s first data breach notification law, after unanimous approval by both chambers of the state legislature a couple of weeks prior. The law will take effect July 1, 2018.

South Dakota’s new law creates a breach notification requirement for any person or business conducting business in South Dakota that owns or retains computerized personal or protected information of South Dakota residents. On trend with recent amendments to other state data breach notification laws, the South Dakota law includes an expansive definition of personal information.

The law defines personal information as a person’s first name or first initial and last name in combination with any one or more of the following data elements:

  • Social Security Number;
  • driver’s license number or other unique identification number created or collected by a government body;
  • account, credit card or debit card number, in combination with any required security code, access code, password, routing number, PIN or any additional information that would permit access to a person’s financial account;
  • health information; and
  • an identification number assigned to a person by the person’s employer in combination with any required security code, access code, password, or biometric data generated from measurements or analysis of human body characteristics for authentication purposes.

In addition, protected information is defined as:

  • a username or email address in combination with a password, security question answer, or other information that permits access to an online account; and
  • account number or credit or debit card number, in combination with any required security code, access code, or password that permits access to a person’s financial account.
  • NOTE: “protected information” does not include a person’s name.

The law requires an information holder to disclose a breach to any South Dakota resident whose personal or protected information was, or is reasonably believed to have been, acquired by an unauthorized person. This disclosure must be made within 60 days from the discovery or notification of the breach, unless a longer period of time is required due to the legitimate needs of law enforcement.

Further, breaches affecting more than 250 South Dakota residents must be reported to the state’s Attorney General. Note that if the information holder reasonably believes the breach will not likely result in harm to the affected person, the information holder is not required to make a disclosure so long as the information holder first conducts an appropriate investigation and provides notice to the attorney general. This determination needs to be documented in writing and maintained for at least three years.

The South Dakota law makes each failure to disclose a breach an unfair or deceptive practice under South Dakota’s Deceptive Trade Practices and Consumer Protection law, which imposes criminal penalties for violations. In addition, the law authorizes the state Attorney General to impose a civil penalty of up to $10,000 per day per violation and to recover attorneys’ fees and costs associated with an action brought against the information holder.

A string of large-scale breaches made clear that additional protections for South Dakota consumers were needed. Alabama is now the only state without a data breach notification law, but that will likely change in the coming weeks. A house-amended version of Senate Bill 318, the Alabama Data Breach Notification Act sponsored by Senator Arthur Orr (R-Decatur), passed the House of Representatives unanimously on March 22nd, but requires concurrence from the Senate before being sent to the Alabama governor for signing.


4 Resources That Make GDPR Compliance Less Painful

The deadline to comply with the GDPR’s complex and far-ranging requirements is rapidly approaching. As your organization races to implement its compliance program before the May 25, 2018 effective date, questions and concerns are likely to arise. While there is no shortage of online guidance on the GDPR, finding answers to your specific questions and concerns, and assuring those answers come from credible sources, can be daunting. But we’re here to help. Below are four resources that make the GDPR more accessible, thereby enabling you to more efficiently and effectively decipher your organization’s obligations.

    1. EUGDPR.org is a good place to start your search. The site answers FAQs about the GDPR in general, how to prepare to meet its requirements, and whether your organization is subject to the GDPR’s mandates. It also summarizes the articles contained in the GDPR and, for those seeking motivation, provides a down-to-the-second Time Until GDPR Enforcement countdown clock.
    2. GDPR Regulations & Recitals. Though they are available elsewhere, this site lays out the regulations and recitals in a very user-friendly format.
    3. Article 29 Working Party (“WP29”) Guidance. WP29 is an advisory group made up of representatives from EU data protection authorities and the European Commission. It has authored guidance on a number of key GDPR topics, including data portability, data protection officers, lead supervisory authority, data protection impact assessments, personal data breach notifications, automated decision-making and profiling, administrative fines, consent, and transparency. WP29’s guidance is well worth heeding because the GDPR envisions a key role for its successor, the European Data Protection Board (“EDPB”), which will replace WP29 when the GDPR takes effect. As discussed in Recital 139, the EDPB will contribute to “the consistent application of” the GDPR and the promotion of “cooperation of [its] supervisory authorities” throughout the EU.
    4. Our Blog & Articles. In past posts and articles, we’ve covered important GDPR issues including employee consent, the impact of the GDPR on US organizations with EU employees, and an employee’s right of erasure. We’ll continue to write regularly on GDPR-related topics in coming months.


An Employee’s Right of Erasure Under the GDPR

The implementation of the European Union’s General Data Protection Regulation (GDPR), with an effective date of May 25, 2018, is just around the corner, and with it will come pressure on the human resources (HR) department to update its approach to handling employee data. The GDPR significantly enhances employee rights with respect to control over their personal data.

In particular, the GDPR introduces the concept of a “right of erasure,” i.e., a “right to be forgotten.” Although the concept exists under current EU law, it applies only in very limited circumstances, where data processing may result in damage or distress. Under the GDPR, pursuant to Article 17 and Recital 65, an employee will have a right to have his or her data erased and no longer processed where consent to processing is withdrawn, where the employee objects to such processing, or where processing is no longer necessary for the purpose for which the data were gathered. That said, the employer can, under certain circumstances, refuse to comply with an employee’s request for erasure of personal data – for example, where data processing is required by law or in connection with a legal proceeding.

Further, there is a time limit for responding to an employee’s request for erasure of data. An employer will be required to comply with the request “without undue delay,” and no later than one month after receipt; if more time is needed, the employer must inform the employee of the reasons for the delay within that first month (Article 12).

To effectively meet the GDPR’s new requirements, employers will need to take stock of the employee data they process related to EU operations (see Does the GDPR Apply to Your U.S.-based Company?). What categories of EU employee data are processed? Where does it come from? In what context and where is it processed and maintained? Who has access to it? Are the uses and disclosures being made of that information permitted? What rights do EU employees have with respect to that information? The answers to these questions are not always self-evident. Employee data may cover current, former, or prospective EU employees as well as interns and volunteers. It may come from assorted places and be processed in less traditional contexts.

To better understand how an employee’s “right of erasure” will impact day-to-day HR operations, below are a few practical examples of instances where an employee will have the right, under the GDPR, to request that his/her data be erased and no longer processed.

Circumstances where an HR department may be compelled to erase employee data:

  • You collected the data during the employee’s hiring process, but, following the completion of that process, you can no longer demonstrate compelling grounds for continuing to process it. Such data could include, inter alia: (i) past employment verifications, (ii) education and credential verifications, (iii) credit reporting and other financial history data, and (iv) government identification numbers.
  • You collected data about an employee in order to administer benefits to him or her, but the employee has since de-enrolled from the benefits program.
  • You collected employee online monitoring data for work productivity purposes – but you also collected data the employee would not reasonably expect to be processed (personal emails, personal messenger conversations, etc.).
  • You collected employee data (e.g., profiling data) for use in evaluating whether to promote an employee to Position X, but ended up promoting another employee to that position instead.
  • You processed data related to employee job performance issues (e.g., late arrivals, absences, disputes with a coworker, etc.) a number of years ago, and the employee has not had similar issues since.
  • You collected identifying data on an employee such as an employee’s past address, phone number, email address, username, financial account information, etc., but the employee has since provided updated information.

Employers must be ready to comply with the new EU data regime upon its effective date next month. If your organization has not yet started, it should begin implementing policies and procedures that inform employees of their enhanced rights of control over their personal data, ensure that the organization can operationally comply with those rights, and train the HR personnel who handle employee requests for erasure of data. This includes developing a plan for responding to employees’ requests in a timely and effective manner, and a review process for determining when there is a legal basis to deny a request.
