UPDATE: The changes to the Massachusetts data breach notification law described below are now in effect. Thus, if you have discovered a data incident involving the personal information of Massachusetts residents, you will want to review these changes carefully – they are significant and the Commonwealth is intent on educating the public about them. Because we have coached many clients through data breaches affecting Massachusetts residents, we recently received a letter from the Massachusetts Office of Consumer Affairs and Business Regulation (OCABR) alerting us to these changes. The same office published a set of FAQs about the law changes which emphasize some key points. For example, as discussed below, the new law expanded the content requirements for notifications to the Attorney General and OCABR to include, among other things, whether the business that experienced the breach maintains a written information security program (WISP) and whether it has updated the WISP. Businesses maintaining personal information of Massachusetts residents should revisit their incident response plan (or develop one).

Observers of the recent changes in the Massachusetts data breach notification law likely will focus on the addition of the obligation to provide 18 months of credit monitoring following a breach involving Social Security numbers (42 months, if the breached entity is a consumer reporting agency). This certainly is a significant change, making Massachusetts only the fourth state to have enacted a similar mandate (See also, California, Connecticut, and Delaware). However, other changes are perhaps much more significant for an organization that has a breach triggering the updated Massachusetts law, which becomes effective April 10, 2019.

Data security and breach notification legislative developments are off to a running start in 2019. On January 1, 2019, Vermont began regulating data brokers, and South Carolina’s adoption of the National Association of Insurance Commissioners’ (NAIC’s) Insurance Data Security Model Law became effective, adding significant breach notification and information security requirements for entities licensed by state insurance regulators, including insurers and agents. The North Carolina Attorney General announced a proposal to make significant changes to that state’s notification law, among them requiring notification for ransomware attacks. The trend continues in Massachusetts, where last week Gov. Charlie Baker signed legislation substantially updating the state’s breach notification law.

Here is an overview of some of the key changes:

Organizations that experience a breach must report to the Attorney General and the Office of Consumer Affairs and Business Regulation whether they have a written information security program (WISP). Nearly ten years ago, Massachusetts enacted one of the most comprehensive sets of data security regulations affecting certain organizations in the state. (Read more about that and get a compliance checklist here.) Organizations that have not adopted a WISP will have to inform the government that they have not done so, which likely will lead to a follow-up inquiry concerning compliance and potentially significant penalties. But that is not all: they also will have to report information such as the type of personal information involved in the incident (e.g., Social Security number, driver’s license number), the steps the organization has taken or plans to take relating to the incident, including updating the WISP, and a certification that they have offered compliant credit monitoring services, if applicable.

Parent companies may have to answer for breaches by subsidiaries. Organizations that must report a breach under the new law and that are owned by another person or corporation must inform affected residents of the name of the parent or affiliated corporation. This provision is sure to create some confusion. For example, the law does not specify what level of ownership triggers the requirement to be listed in the notice to affected residents. Additionally, because a breached entity might be owned by a few different entities, it is unclear whether all of those entities would have to be listed. Clearly, this provision may create some unfavorable publicity for organizations whose subsidiaries experience a breach. As such, it might spur them to be more actively involved with the data security compliance and breach response efforts of their subsidiary and affiliated entities. Parents and affiliated companies may also want to revisit their cyber insurance policies to assess coverage for losses that may arise out of a subsidiary’s breach. For the breached subsidiary, this provision may result in involving its parent companies sooner and more extensively in the breach response process.

Once an organization knows about a breach affecting a Massachusetts resident, it must notify the resident as soon as practicable and without unreasonable delay, and cannot wait to determine the total number of residents affected by the incident. Security incident investigations sometimes take time, and it is not uncommon during those investigations for the number of affected persons to grow as the investigation continues. With this change, businesses need to notify on a rolling basis, and not wait for the investigation to conclude before sending notification. Additionally, because state agency notifications must include the number of affected persons, businesses will need to keep these agencies apprised of the growing number of residents affected.

The Office of Consumer Affairs and Business Regulation will be reporting about your breach on its website. When an organization reports a breach to the Office of Consumer Affairs and Business Regulation (OCABR), under the new law OCABR must post on its website copies of the sample notice sent to affected residents within 1 business day of receipt and continually update the site with information learned from the investigation. OCABR also will be helping affected residents file public records requests to obtain the notices that organizations that experienced the breach have filed with the Attorney General and OCABR.

A number of the updates to the Massachusetts data breach notification law are not the typical changes we see made in many other states – e.g., expanding the definition of personal information, establishing a set number of days by which notice must be provided. Some of the changes seem intent on drawing attention to organizations that had a breach and their related companies (posting breach notices on the OCABR website, helping affected residents get more information about the breach, requiring that the names of parent companies be listed in the notice, etc.) and pushing for greater enforcement of data security safeguards (requiring reporting on whether a WISP is maintained). Organizations will need to revisit their overall incident response plans, as well as confirm their compliance with the state’s data security mandate, now nine years old.

As we reported, in late February, California Attorney General Xavier Becerra and Senator Hannah-Beth Jackson introduced Senate Bill 561, legislation intended to strengthen and clarify the California Consumer Privacy Act (CCPA). This week, the Senate Judiciary Committee referred the bill to the Senate Appropriations Committee by a vote of 6-2. This move came despite concerns raised about the scope of the amendment’s expanded private right of action. It is worth noting that a restricted private right of action is believed to have been fundamental to the compromise that led to the CCPA becoming law.

If SB 561 becomes law, it would make a number of significant changes to the current law. In particular, SB 561 would significantly expand the scope of the private right of action presently written into the CCPA. In its current form, the CCPA provides consumers a private right of action if their nonencrypted or nonredacted personal information is subject to an unauthorized access, exfiltration, theft, or disclosure because the covered business did not meet its duty to implement and maintain reasonable safeguards to protect that information. The amendment proposed under SB 561 broadens this provision to grant consumers a private right of action if their rights under the CCPA are violated.

This could become very costly for businesses subject to the CCPA. A plaintiff suing under the CCPA can recover statutory damages in an amount not less than $100 and not greater than $750 per incident or actual damages, whichever is greater, as well as injunctive or declaratory relief and any other relief the court deems proper. With the change under SB 561, violations of rights under the statute, such as the right to certain notifications or the right to have certain information deleted upon request, potentially could trigger statutory damages.

A similar cause of action exists under an Illinois privacy law that you might have heard about, the Illinois Biometric Information Privacy Act or “BIPA.” That provision has resulted in a flood of litigation, including putative class actions, seeking to recover statutory damages for plaintiffs who allege their biometric information has been collected and/or disclosed in violation of the statute.

According to reports, while Senator Jackson promised to work with stakeholders to address concerns about an expanded private right of action, the lawmaker apparently is intent on maintaining the ability for consumers whose CCPA privacy rights are violated to sue, without having to rely on the Attorney General’s office to enforce the CCPA.

Pending legislation could create new consumer privacy rights in Massachusetts. Earlier this year, Senator Cynthia Creem presented An Act Relative to Consumer Data Privacy in the Massachusetts Senate. This Consumer Privacy Bill, SD.341, combines key aspects of the California Consumer Privacy Act (CCPA) and Illinois’s Biometric Information Privacy Act (BIPA). This bill would allow Massachusetts consumers a private right of action if their personal information or biometric information (referred to separately in the bill) is improperly collected.

The Consumer Privacy Bill defines “biometric information” as an individual’s physiological, biological or behavioral characteristics, including an individual’s DNA, that can be used, singly or in combination with each other or with other identifying data, to establish individual identity. Biometric information includes, but is not limited to, imagery of the iris, retina, fingerprint, face, hand, palm, vein patterns, and voice recordings, from which an identifier template, such as a faceprint, a minutiae template, or a voiceprint, can be extracted, and keystroke patterns or rhythms, gait patterns or rhythms, and sleep, health, or exercise data that contain identifying information.

The bill defines “personal information” as any information relating to an identified or identifiable consumer. “Personal information” means information that identifies, relates to, describes, is capable of being associated with, or could reasonably be linked, directly or indirectly, with a particular consumer or the consumer’s device.

However, this definition does not include publicly available information or consumer information that is deidentified or aggregate consumer information. Moreover, the bill creates an exception for a business collecting or disclosing personal information of the business’s employees so long as the business is collecting or disclosing such information within the scope of its role as an employer. Therefore, unlike California’s CCPA, where the application to employee data remains an open question, the current text of the Massachusetts bill makes reasonably clear that the law would not apply to employee data as described above. That said, it is still early in the legislative process and the bill could be revised to include employee data.

The pending legislation would require businesses collecting a Massachusetts consumer’s personal information to notify the consumer of the following before the point of collection:

(1) The categories of personal information it will collect about that consumer;

(2) The business purposes for which the categories of personal information shall be used;

(3) The categories of third parties with whom the business discloses personal information;

(4) The business purpose for third party disclosure; and

(5) The consumer’s rights to request:

                  (A) A copy of the consumer’s personal information;

                  (B) The deletion of the consumer’s personal information; and

                  (C) To opt out of third party disclosure.

In addition to this notice requirement, the bill would give consumers a statutory right to request that businesses collecting their personal information disclose to the consumer:

(1) The specific pieces of personal information the business has collected about that consumer;

(2) The sources from which the consumer’s personal information was collected;

(3) The names of third parties to whom the business disclosed the consumer’s personal information; and

(4) The business purpose for third party disclosure.

Businesses would have to make available to consumers two or more designated methods for submitting consumer verified requests for personal information, including, if the business maintains a web site, a link on the home page of the web site. A business receiving a verifiable consumer request generally must provide the requested information within 45 days of receiving the request, but may extend that period once by an additional 45 days, so long as the request for the extension is provided within the first 45-day period. The proposed legislation also creates a consumer right to request that a business delete any personal information collected from the consumer, and the right to opt out of third party disclosure at any time.

The legislation would be enforceable both through a private right of action and by the Massachusetts Attorney General. A consumer could recover (1) damages in an amount not greater than $750 per consumer per incident or actual damages, whichever is greater, for any violation of the act; (2) injunctive or declaratory relief; and (3) reasonable attorney fees and costs. The Attorney General would be authorized to obtain a temporary restraining order or preliminary or permanent injunction against a violation of the Act. In addition, the Attorney General may seek a civil penalty of not more than $2,500 for each violation or $7,500 for each intentional violation.

This Consumer Privacy Bill would impose administrative burdens on businesses, including an obligation to train employees, as well as creating new exposure to damages and penalties. Given the litigation we are seeing under BIPA, businesses collecting Massachusetts consumers’ personal information should monitor the progress of this legislation to determine whether they should begin preparations for complying with yet another consumer privacy provision.

 

As reported by CBC, B.C. Pension Corporation announced a data breach involving pension plan records after discovering a box containing microfiche could not be found following a recent office move. The box contained personal information (names, social insurance numbers and dates of birth) on approximately 8,000 pension plan participants. The company employed those participants during the period 1982 to 1997. Learning of this incident, persons responsible for pension plan administration might be wondering how secure their facilities (or their service providers’ facilities) are for remote storage. And pension plan participants might be wondering why plans need this information, and for so long.

In the U.S., the Employee Retirement Income Security Act (ERISA) governs the administration of pension plans, and the law includes specific record retention requirements. For example, persons who are responsible for filing plan reports must “maintain records to provide sufficient detail to verify, explain, clarify and check for accuracy and completeness.” ERISA Section 107. In addition, ERISA requires employers to maintain sufficient records to determine benefits due to employees. ERISA Section 209. Because employees may not retire for many years after accruing benefits under the pension plan, plans need to maintain records until plan participants retire and the records must be sufficient to determine benefits under the plan.

These record retention requirements present important issues for employers, plan administrators, and pension plan service providers. We have written about pension plans experiencing data breaches caused by malicious attackers. But relatively straightforward administrative recordkeeping activities also can result in personal information being compromised. In late 2016, the ERISA Advisory Council, a 15-member body appointed by the Secretary of Labor to provide guidance on employee benefit plans, shared with the federal Department of Labor some considerations concerning cybersecurity. To date, the DOL has not issued any formal guidance on these recommendations; however, employers, plan administrators, and pension plan service providers should revisit their procedures for handling sensitive personal information maintained in their pension plan records.

According to the Council’s recommendations, there are four major areas for effective practices and policies: (i) data management; (ii) technology management; (iii) service provider management; and (iv) people issues. This is a good list to work from. While not exhaustive, the following action items may help avoid incidents like the one discussed above:

  • Retain only the data that is needed; if certain data elements can be redacted, remove them;
  • Maintain an inventory of records that are retained regardless of format, and where to find them;
  • Outline a clear process for moving records, and track location and inventory during the move; and
  • Delete records that are no longer needed; confirm service providers have done so, as applicable.

Of course, no set of safeguards for protecting personal information will prevent all kinds of compromises to it. Mistakes happen, so employers and plan administrators should be prepared by developing, maintaining, and practicing incident response plans.

UPDATE: As discussed below, SB2134, as introduced, would have amended BIPA to delete the language that creates a private right of action and provide, instead, that violations resulting from the collection of biometric information by an employer for employment, human resources, fraud prevention, or security purposes would be subject to the enforcement authority of the Department of Labor. But, to survive, SB 2134 needed to be reported out of committee by March 28, 2019. That did not happen. Again, businesses should continue their efforts to comply with the requirements of BIPA.

Many businesses currently are defending a wave of class action lawsuits filed under the Illinois Biometric Information Privacy Act, popularly known as “BIPA.” The floodgates to litigation were opened earlier this year when the Illinois Supreme Court ruled that individuals need not allege actual injury or adverse effect, beyond a violation of his/her rights under BIPA, in order to qualify as an “aggrieved” person and be entitled to seek liquidated damages, attorneys’ fees and costs, and injunctive relief under the Act. Potential damages are substantial, as BIPA provides for statutory damages of $1,000 per negligent violation or $5,000 per intentional or reckless violation of the Act. The majority of BIPA suits have been brought as class actions seeking statutory damages on behalf of each individual affected, exposing businesses to potentially crushing damages.
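To illustrate how quickly class-wide statutory damages can become "crushing," here is a minimal back-of-the-envelope sketch. The class size and violation counts are hypothetical illustrations, not figures from any actual case; only the per-violation dollar amounts come from the statute as described above.

```python
# BIPA statutory damages: $1,000 per negligent violation,
# $5,000 per intentional or reckless violation.
NEGLIGENT_PER_VIOLATION = 1_000
INTENTIONAL_PER_VIOLATION = 5_000

def bipa_exposure(class_size: int, violations_per_person: int,
                  intentional: bool = False) -> int:
    """Rough statutory-damages ceiling for a hypothetical class.

    Assumes every class member proves the same number of violations;
    real cases turn on what counts as a discrete "violation."
    """
    per_violation = INTENTIONAL_PER_VIOLATION if intentional else NEGLIGENT_PER_VIOLATION
    return class_size * violations_per_person * per_violation

# Hypothetical: 2,000 employees, one collection violation each.
print(bipa_exposure(2_000, 1))                    # 2,000,000 (negligent)
print(bipa_exposure(2_000, 1, intentional=True))  # 10,000,000 (intentional)
```

Even a modest workforce and a single violation per person yields seven- or eight-figure exposure, which is why the class action vehicle makes these suits so consequential.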

In February, SB2134 was introduced and would amend BIPA to delete the language that creates a private right of action. If enacted, the amendment would provide, instead, that violations resulting from the collection of biometric information by an employer for employment, human resources, fraud prevention, or security purposes would be subject to the enforcement authority of the Department of Labor. The amendment would permit employees and former employees to file a complaint with the DOL, provided the complaint is filed within one year from the date of the violation. Violations of BIPA that constitute a violation of the Consumer Fraud and Deceptive Business Practices Act would be enforced by the Attorney General. If the amendment is enacted, the changes would be effective immediately. Of course, it is unclear what the effect would be for pending litigation.

We expect businesses will be watching developments concerning SB2134 closely; the bill currently is in committee. However, businesses should continue their efforts to comply with the substantive requirements of BIPA, which do not appear to be affected by the changes proposed in SB2134.

As wearable and analytics technology continues to explode, professional sports leagues, such as the NFL, have aggressively pushed into this field. (See Bloomberg.) NFL teams insert tiny chips into players’ shoulder pads to track different metrics of their game. During the 2018-2019 NFL season, released data showed that Ezekiel Elliott reached 21.27 miles per hour on a 44-yard run, his fastest of the season. The Dallas Cowboys are not alone, as all 32 teams throughout the league can access this chip data, which is collected via RFID tracking devices. Sports statistics geeks don’t stand a chance, as this technology will track completion rates, double-team percentages, catches over expectation, and a myriad of other data points.

There are obvious questions and concerns about the use of this technology, and not just at the professional level. Wearables can be found at all levels of sports and athletic activities, including at colleges and high schools. At the professional level, the NFL is unique in that it allows teams to use the chip data during contract negotiations. However, players do not have full access to this information, unless specifically granted by individual teams. This is important since there is much debate over who truly owns this data. And, for a variety of reasons, players and athletes want to know where their information is stored, how it is stored, whether and how it might be used and disclosed, who has access to it, and what safeguards are in place to protect it. Major League Baseball and the Players Association added Attachment 56 to the 2017-2021 Collective Bargaining Agreement to address some of these concerns. But, again, these and other questions are not unique to professional ball players.

With devices ranging from wearable monitors to clothing and equipment with embedded sensors, professional teams, colleges and universities, local school districts, and other sports and athletic institutions, as well as the companies that provide the wearables, can now collect massive amounts of data such as an athlete’s heart rate, glucose level, breathing, gait, strain, or fatigue. On the surface, this data may relate to an athlete’s performance and overall wellness, which may be somewhat apparent to onlookers without the aid of the device. However, alone or aggregated, the data may reveal more sensitive personal information relating to the athlete’s identity, location, or health status, information that cannot be obtained just by closely observing the individual. When organizations collect, use, share, or store this data, it creates certain privacy and security risks and numerous international, federal, and state data protection laws may apply. Any sports or athletic organization that develops a wearable device program, or has reason to believe that these devices are being used by coaches and others to collect similar data, should be mindful of these risks and regulatory issues.

Below is a non-exhaustive list of some of these laws:

A few weeks back, a company’s watch list containing nearly 2.5 million individuals and entities considered “high-risk” for its clients was mistakenly leaked to the public. A “high-risk” entity in this circumstance was one potentially linked to organized crime or terrorism. The leak resulted from an unsecured and incorrectly configured company database.

Typically in the news we hear of data breaches involving a leak of personal information, including Social Security numbers, medical information, or credit card numbers. Moreover, state data breach notification and reasonable safeguard laws generally create an affirmative obligation to protect against and respond to a data breach involving personal information. For example, under California’s data security law, a business that owns, licenses, or maintains personal information must implement and maintain reasonable security procedures and practices appropriate to the nature of the information. Similarly, under New Jersey’s data breach notification law, any business that conducts business in New Jersey, or any public entity, that compiles or maintains computerized records that include personal information must disclose any breach of security of those computerized records, following discovery or notification of the breach, to any customer who is a resident of New Jersey whose personal information was, or is reasonably believed to have been, accessed by an unauthorized person. The definition of personal information under state data breach notification and reasonable safeguard laws commonly includes the following types of data: (i) Social Security number; (ii) driver’s license number or state-issued ID card number; or (iii) account number, credit card number, or debit card number combined with any security code, access code, PIN, or password needed to access an account. Moreover, some states have broader definitions of personal information, which can include other types of data such as biometric data, passport numbers, or medical information. Note that this type of data is unlike the information involved in the “watch list” incident mentioned above.

Despite media and legislative focus on data breaches of personal information, there are other types of sensitive data that, when breached, can have a detrimental impact on an organization. An organization can face a data breach involving leaked confidential business information, trade secrets, organizational strategies, or financial information, just to name a few. As a result, it is important for an organization to have safeguards in place to protect any data it deems of value, whether personal information or otherwise, even if there is no affirmative obligation under the law to do so. Strong IT safeguards are part of the solution, but not a silver bullet. Administrative and physical safeguards also are needed, such as access management policies, awareness training, equipment inventory, and vendor assessment and management programs. No organization is immune to a data breach, and preparedness can make all the difference in both preventing a breach and responding if one does occur.

Below are a few of our helpful resources for preventing and responding to a data breach:

 

The California Consumer Privacy Act (CCPA), passed in 2018 and taking effect January 1, 2020, is considered the most expansive state privacy law in the United States, and sparked a flurry of state privacy law legislative proposals, in particular in Washington state. This January, a group of state senators in Washington introduced the Washington Privacy Act, SB 5376 (WPA), slightly updated in late February. On March 6th, the bill passed the Senate with a nearly unanimous vote, and now heads to the House for review. If approved, the WPA will take effect July 31, 2021.

Unlike other states that are modeling their bills largely on the CCPA (e.g., Hawaii, Maryland, New Mexico), the WPA would establish more GDPR-like requirements for businesses that collect personal information relating to Washington residents. In fact, the WPA’s legislative findings explicitly state that Washington residents “deserve to enjoy the same level of privacy safeguards” as those afforded to EU residents under the GDPR. In addition to requirements for notice, and consumer rights such as access, deletion, and rectification, the WPA would impose restrictions on the use of automated profiling and facial recognition.

Below are key aspects of the WPA:

  • Jurisdictional Scope. The WPA would apply to legal entities that conduct business in Washington or produce products or services intentionally targeted to residents of Washington, and that satisfy one or more of the following thresholds: controls or processes data of 100,000 consumers or more; or derives over 50% of gross revenue from the sale of personal information and processes or controls personal information of 25,000 consumers or more. The bill includes exemptions for personal data regulated by HIPAA, HITECH, or the GLBA, and data sets maintained for employment record purposes. Personal data is defined broadly to include any information relating to an identified or identifiable natural person.
  • Consumer Rights. Washington residents are afforded the power to request that controllers of their personal data:
    • provide them with confirmation whether their personal information is being processed by the controller or sold to a third-party;
    • provide them with a copy of the personal data undergoing process;
    • correct inaccurate personal data;
    • delete their personal data under specified circumstances (e.g., the personal data is no longer necessary in relation to the purpose for which it was collected, the processing is for direct marketing purposes, or the personal data has been unlawfully processed).
  • Consent. In general, businesses in the U.S. are used to needing only implied or negative consent from customers with respect to the collection and use of their data. The WPA would require consent to be a “clear affirmative act establishing a freely given, specific, informed, and unambiguous indication of a consumer’s agreement to the processing of personal data relating to the consumer, such as by a written statement or other clear affirmative action.”
  • Controllers and Processors. In general, controllers determine the purposes and means of processing personal data, while processors process personal data on behalf of the controllers. Thus, under the WPA, controllers would be responsible for meeting the requirements of the WPA, while processors are responsible for following the instructions of their controllers and assisting them with meeting the requirements of the law. Contracting between the parties will be critical.
  • Transparency. Controllers must be transparent and accountable for the processing of personal data by making a “meaningful,” “clear,” and “reasonably accessible” privacy notice available (although the language in the bill is less than clear). The notice must include: the categories of personal data collected, the purposes for which personal data is disclosed to third parties, the rights the consumer may exercise, the categories of personal data shared with third parties, and the categories of third parties with whom the controller shares data.
  • Risk Assessments. Controllers must conduct and document risk assessments covering the processing of personal data prior to the processing of such personal data whenever there is a change in processing that materially impacts the risk to individuals, and on at least an annual basis regardless of changes in processing.
  • Enforcement. A controller in violation of the law would be subject to an injunction and liable for a civil penalty of not more than $2,500 for each violation or $7,500 for each intentional violation.

It is worth noting that unlike California’s CCPA, which leaves open the possibility of application to employee data, the WPA explicitly states that a protected “consumer” does not include an employee or contractor of a business acting in their role as an employee or contractor. Moreover, as already mentioned above, data sets maintained for employment record purposes are exempt from the jurisdictional scope. That said, the WPA is not yet final, and could be revised during the legislative process to include employee data.

States across the country are contemplating ways to enhance their consumer privacy and security protections. For example, we recently spotlighted New Jersey in two posts (available here and here), detailing several NJ Assembly bills relating to privacy and security currently under consideration. Organizations, regardless of their location, should be assessing and reviewing their data collection activities, building robust data protection programs, and investing in written information security programs (WISPs).

Yesterday, the U.S. Supreme Court rejected a petition for a writ of certiorari by Zappos requesting that the Court review a Ninth Circuit decision allowing customers affected by a data breach to proceed with a lawsuit on grounds of vulnerability to fraud and identity theft. The ruling stems from a 2012 breach affecting over 24 million Zappos customers, in which hackers accessed customers’ names, account numbers, passwords, email addresses, billing and shipping addresses, phone numbers, and the last four digits of their credit card numbers.

In March 2018, the Ninth Circuit reversed a decision by the United States District Court for the District of Nevada that had tossed claims brought by customers who alleged the breach left them at “imminent” risk, on the ground that they had not alleged already suffering financial losses. A three-judge Ninth Circuit panel held that the sensitivity of the information stolen in the breach, including credit card numbers and other means to commit fraud or theft, led them to conclude the customers had adequately alleged an injury. “Plaintiffs allege that the type of information accessed in the Zappos breach can be used to commit identity theft, including by placing them at higher risk of ‘phishing’ and ‘pharming,’ which are ways for hackers to exploit information they already have to get even more PII,” the panel wrote.

Businesses facing class action litigation following a data breach have long waited for the Supreme Court to weigh in on whether a demonstration of actual harm is required to have standing to sue. Federal circuit courts have struggled with this issue over the past few years, in large part due to the lack of clarity following the U.S. Supreme Court’s decision in Spokeo, Inc. v. Robins, which held that even if a statute has been violated, plaintiffs must demonstrate an “injury-in-fact” that is both concrete and particularized, but failed to clarify whether a “risk of future harm” qualifies as such an injury. For example, the 3rd, 6th, 7th, 9th, and D.C. Circuits have generally found standing, while the 1st, 2nd, 4th, and 8th Circuits have generally found no standing, where a plaintiff alleges only a heightened “risk of future harm.”

In its appeal to the Supreme Court, Zappos argued that “the factual scenario this case presents – a database holding customers’ personal information is accessed, but virtually no identity theft or fraud results – is an increasingly common one.” The Supreme Court’s rejection of the Zappos petition is considered a setback for companies facing similar litigation. Moreover, the California Consumer Privacy Act, set to take effect in 2020, authorizes a private cause of action against a covered business for damages resulting from a failure to implement appropriate security safeguards that results in a data breach, and the Illinois Supreme Court recently held that actual harm is not required to sue under the Illinois Biometric Information Privacy Act (“BIPA”). The Supreme Court did not provide a reason for its denial of the Zappos petition; nonetheless, its decision, coupled with these state initiatives, is likely to have a significant impact on data breach class action lawsuits going forward.

On February 25, 2019, the Third Circuit held that a New Jersey engineering firm that monitored its former employees’ social media accounts was not barred from winning an injunction to prevent four former employees from soliciting firm clients and destroying company information.

In this case, several employees left the engineering firm to start two competing businesses. While still employed with the firm, the employees discussed over social media the possibility of starting a competing venture, and transmitted firm documents and other relevant information outside the firm’s network. After the mass resignation, and the loss of a key firm client, the firm’s network administrator was instructed to examine the former employees’ work computers. During this time, the administrator allegedly, inter alia, reviewed browser history (including deleted activity), accessed personal social media accounts via passwords saved on the computers, and installed software allowing him to monitor social media activity without detection.

The Third Circuit, in a split three-judge panel opinion, upheld the district court’s July decision, holding that the firm’s monitoring activity did not constitute “inequitable conduct” under the “unclean hands doctrine” so as to bar the firm from winning its request for an injunction. The unclean hands doctrine “applies when a party seeking relief has committed an unconscionable act immediately related to the equity the party seeks in respect to the litigation.” The court emphasized that even if the firm’s monitoring activity did constitute an “unconscionable act,” the conduct was not related to the claim upon which equitable relief was sought. In other words, the court’s decision was not based on whether the firm’s monitoring activity was in fact “unconscionable,” but rather on whether it related to the injunction request, leaving the door open for such conduct to be considered “unconscionable” under different circumstances.

Although not mentioned in the opinion, New Jersey has a social media access law that generally prohibits employers from requesting or requiring a current or prospective employee to provide or disclose any user name or password for, or in any way provide the employer access to, a personal account. That said, the law includes an exception permitting employers to conduct investigations regarding: work-related employee misconduct based on information about activity on social media; or an employee’s actions based on information about the unauthorized transfer of an employer’s proprietary, confidential, or financial information to social media. Also not mentioned in the opinion, cases arising under similar circumstances often invoke violations of the federal Stored Communications Act (“SCA”). For example, in Pure Power Boot Camp, Inc. v. Warrior Fitness Boot Camp, a New York district court ruled in a non-compete action that accessing former employees’ personal accounts violated the SCA.

There are many reasons companies monitor employees, including boosting productivity, dissuading cyber-slacking or social “not-working,” protecting trade secrets and confidential business information, preventing theft, avoiding data breaches, avoiding wrongful termination lawsuits, ensuring that employees are not improperly snooping themselves, complying with electronic discovery requirements, and generally dissuading improper behavior.

Excessive, clumsy, or improper employee monitoring, however, can cause significant morale problems and, worse, potentially create legal liability for invasion of privacy under statutory and common law. Companies should review their policies and applicable state and federal law, tread carefully before embarking on a monitoring program, and remember to monitor the monitors.