In honor of Data Privacy Day, we provide the following “Top 10 for 2016.”  While the list is by no means exhaustive, it does provide some hot topics for organizations to consider in 2016.

  1. EU/U.S. Data Transfer (status of Safe Harbor).  On October 6, 2015, the Court of Justice of the European Union (CJEU) ruled in Schrems v. Data Protection Commissioner (Case C-362/14) that the voluntary Safe Harbor Program did not provide adequate protection to the personal data of EU citizens. The Safe Harbor Program was used extensively by organizations that needed to transfer data from the EU to the U.S. Post-Schrems, U.S. companies have been unclear about how to transfer data out of the EU in a compliant manner. The ultimate resolution of this issue is one of the most anticipated privacy topics for 2016.
  2. People Analytics including Employee Tracking/Wearables.  The Federal Trade Commission’s January 2016 report discussing “big data” raised a number of issues for organizations concerning the use of data analytics with respect to consumer data, as well as the application of big data tools in the workplace. People analytics refers generally to a data-driven approach to managing an organization’s human capital, and it is likely to be a significant trend for employers in the months and years ahead. Some of the data used to perform these analytics is collected through the devices employees use and wear. For example, as GPS- and RFID-enabled devices become more prevalent, employers face the difficulty of balancing workplace privacy risks against the ability to obtain information about employees’ whereabouts, which can substantially increase productivity. Similarly, wellness programs seek to incentivize employees (including the members of their households) to live “healthier” lives. Wearable technologies such as Fitbit allow for the collection of data which, when analyzed, can have substantial benefits and help control healthcare costs, but they can also raise privacy and discrimination risks.
  3. Risk Assessment/Written Information Security Program. Many businesses remain unaware of how much personal and confidential information they maintain, who has access to it, how it is used and disclosed, how it is safeguarded, and so on. Getting a handle on a business’ critical information assets must be the first step, and is perhaps the most important step to tackling information risk. It is logically impossible to adequately safeguard something you are not aware exists. In fact, failing to conduct a risk assessment may subject the business to penalties under federal and/or state law. Even if adopting a written information security program (WISP) to protect personal information is not an express statutory or regulatory mandate in your state (as it is in states such as CA, CT, FL, MA, MD, OR, etc.), having one is critical to addressing information risk. Importantly, an organization’s WISP should also address company data outside of the company’s control, such as data or information which is provided to vendors who provide services to an organization. Not only will a WISP better position a company when defending claims related to a data breach, it will also help the company manage and safeguard critical information and potentially avoid a breach from occurring in the first place.
  4. The Telephone Consumer Protection Act (TCPA).  According to statistics compiled by WebRecon LLC, 3,710 TCPA lawsuits were filed in 2015, an increase of 45% over 2014. Demonstrating consistency, 2015 marked the 8th year in a row in which the number of TCPA suits increased over the preceding year. Tellingly, 23.6% of those suits (877) were filed as putative class actions. With the recent SCOTUS decision in Campbell-Ewald making defense of class actions under the TCPA more difficult, we expect the number of TCPA suits to continue to grow in 2016. Many of these suits are not aimed only at large companies. Instead, these suits often focus on small businesses that may unknowingly violate the TCPA. With statutory damages ranging from $500 to $1,500 per violation (e.g., per fax/text sent or call made), these suits often result in potential damages in the hundreds of thousands, if not millions, of dollars. Understanding the FAQs for the TCPA and taking steps to comply with the TCPA is a great first step as we enter 2016.
  5. Industry Specific Guidance.  Whether it is the U.S. Food and Drug Administration (FDA) or the U.S. Commodity Futures Trading Commission (CFTC), organizations will need to remain vigilant in 2016 to ensure they are addressing industry specific rules or guidance regarding cybersecurity and the safeguarding of the information they maintain.
  6. BYOD/COPE.  Many organizations have adopted policies allowing employees to use their own electronic devices in the workplace, turning to Bring Your Own Device (“BYOD”) programs, often without considering all of the risks and related issues. Others are sticking with Corporate Owned Personally Enabled (“COPE”) programs.  If you are considering BYOD, you should review our comprehensive BYOD issues outline and determine whether BYOD or COPE is the best option for your organization.
  7. Investigating Social Media.  The use of social media continues to grow on a global scale, and the content available on a user’s profile or account is often sought in connection with litigation and/or employment decisions. While public content may generally be viewed without issue, employers need to be aware of how they are accessing social media content. This is especially true as the list of states enacting legislation to protect social media privacy continues to grow. In a litigation context, if private content is accessed improperly, serious repercussions can follow.
  8. Federal Trade Commission (FTC) & Federal Communications Commission’s (FCC) Enforcement Re: Data Security.  Both the FTC and FCC continued enforcement actions in 2015 in connection with companies’ alleged failure to properly safeguard data. FCC actions resulted in consent decrees which included penalties in the hundreds of thousands of dollars, and mirrored previous consent decrees entered into by the FTC. However, 2015 decisions in cases stemming from the FTC’s actions found the FTC may have difficulty meeting its burden of proving that a company’s alleged unreasonable data security practices caused substantial consumer injury or that any consumer whose personal information was maintained by a company suffered any harm as a result of such alleged conduct. For 2016 it remains to be seen just how far the FCC and FTC will go to continue enforcement actions related to data security. Nevertheless, organizations still need to be conscious of the statements or promises they make concerning their data security practices and implement appropriate safeguards to protect the personal information they maintain.
  9. HIPAA Compliance. The Office for Civil Rights (OCR) stated that in early 2016 it will launch Phase 2 of its audit program measuring compliance with HIPAA’s privacy, security and breach notification requirements by covered entities and business associates. As we previously discussed, having the right documents in place can go a long way toward helping an organization survive an OCR HIPAA audit. Now that it appears these audits are coming, it is important that covered entities and business associates invest the time in identifying and closing any HIPAA compliance gaps before an OCR investigator does this for them. This is particularly true as some of the largest HIPAA settlements to date are less about harm, and more focused on compliance.
  10. Develop a Plan for Breach Notification. All state and federal data breach notification requirements currently in effect require notice be provided as soon as possible (with some setting forth specific time periods). Failing to respond appropriately could result in significant liability.  Employers need to be conscious of data breach issues as the leading cause of breaches is employee error. Developing a breach response plan is not only prudent but also may be required under federal or state law.  A proactive approach is often the simplest and cheapest way to avoid liability.
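The statutory-damages arithmetic noted in item 4 above can be sketched in a few lines. The per-violation figures come from the statute; the 1,000-message campaign below is an invented figure for illustration, not drawn from any actual case.

```python
# Hedged sketch of TCPA statutory exposure. $500 per violation is the base
# statutory amount, trebled to $1,500 for willful or knowing violations;
# the campaign size is a made-up example.

PER_VIOLATION = 500      # base statutory damages per call/text/fax
TREBLE_MULTIPLIER = 3    # willful or knowing violations

def tcpa_exposure(violations: int) -> tuple[int, int]:
    """Return (base, trebled) statutory exposure for a violation count."""
    base = violations * PER_VIOLATION
    return base, base * TREBLE_MULTIPLIER

# A single campaign of 1,000 unsolicited texts:
low, high = tcpa_exposure(1_000)
print(f"${low:,} to ${high:,}")  # prints "$500,000 to $1,500,000"
```

Even a modest contact list quickly produces six- or seven-figure exposure, which is one reason class treatment makes these suits so costly.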

Be Vigilant and Watch for New Legislation. Managing data and ensuring its privacy, security and integrity is critical for businesses and individuals, and is increasingly becoming the subject of broad, complex regulation. As such, companies are left to navigate the constantly evolving web of growing state legislation and/or industry guidance. Organizations therefore need to be vigilant in order to remain compliant and competitive in this regard.

Last week, the U.S. Food and Drug Administration (FDA) issued draft guidance outlining important steps medical device manufacturers should take to address cybersecurity risks to keep patients safe and better protect the public health. The draft guidance, which details the agency’s recommendations for monitoring, identifying, and addressing cybersecurity vulnerabilities in medical devices after they have entered the market, is part of the FDA’s ongoing efforts to ensure the safety and effectiveness of medical devices in the face of potential cyber threats.

The FDA has identified cybersecurity threats to medical devices as a growing concern. While manufacturers can incorporate controls in the design of a product to help prevent these risks, it is essential that manufacturers also consider improvements during maintenance of devices, as the evolving nature of cyber threats means risks may arise throughout a device’s entire lifecycle.

Commenting on the guidance, Suzanne Schwartz, M.D., M.B.A., Associate Director for Science and Strategic Partnerships and Acting Director of Emergency Preparedness/Operations and Medical Countermeasures in the FDA’s Center for Devices and Radiological Health said,

All medical devices that use software and are connected to hospital and health care organizations’ networks have vulnerabilities—some we can proactively protect against, while others require vigilant monitoring and timely remediation. [The] draft guidance will build on the FDA’s existing efforts to safeguard patients from cyber threats by recommending medical device manufacturers continue to monitor and address cybersecurity issues while their product is on the market.

The draft guidance recommends implementing a structured and systematic cybersecurity risk management program to identify and respond in a timely fashion to identified vulnerabilities. Such a program would include:

  • Application of the 2014 NIST voluntary framework for Improving Critical Infrastructure Cybersecurity;
  • Monitoring cybersecurity information sources for identification and detection of cybersecurity vulnerabilities and risk;
  • Understanding, assessing and detecting presence and impact of a vulnerability;
  • Establishing and communicating processes for vulnerability intake and handling;
  • Clearly defining essential clinical performance to develop mitigations that protect, respond and recover from the cybersecurity risk;
  • Adopting a coordinated vulnerability disclosure policy and practice; and
  • Deploying mitigations that address cybersecurity risk early and prior to exploitation.

In addition to outlining program components, the guidance also includes proposed steps device manufacturers should take to report cybersecurity vulnerabilities. The FDA specified that in the bulk of cases, advance notice of actions taken by manufacturers to address cybersecurity vulnerabilities will not be required. However, the FDA would require device manufacturers to provide the agency notice for the small subset of cybersecurity vulnerabilities that may compromise the clinical performance of a device and present a reasonable probability of serious adverse health consequences or death. In instances where a vulnerability is quickly addressed in a way that sufficiently reduces the risk of harm to patients, the guidance specifies that the FDA does not intend to enforce urgent reporting if: there are no serious adverse events or deaths associated with the vulnerability; within 30 days of learning of the vulnerability, the manufacturer notifies users and implements changes that reduce the risk to an acceptable level; and the manufacturer is a participating member of an Information Sharing and Analysis Organization (ISAO) and reports the vulnerability, its assessment, and remediation to the ISAO.

In summarizing the FDA’s goal, Schwartz said, “The FDA is encouraging medical device manufacturers to take a proactive approach to cybersecurity management of their medical devices…[o]nly when we work collaboratively and openly in a trusted environment, will we be able to best protect patient safety and stay ahead of cybersecurity threats.”

Whether your organization is impacted by the FDA draft guidance or not, the core principles of “Identify, Protect, Detect, Respond, and Recover” should be followed by all organizations as they address cybersecurity. The draft guidance is subject to a 90-day public comment period.

Today, in a 6-3 decision, the Supreme Court of the United States held in Campbell-Ewald Co. v. Gomez that an unaccepted settlement offer or offer of judgment does not moot a plaintiff’s case. As previously discussed, the Supreme Court granted a petition for a writ of certiorari on May 18, 2015 and heard arguments in the case on October 14, 2015.

The United States Navy contracted with petitioner Campbell-Ewald Company (Campbell) to develop a multimedia recruiting campaign that included the sending of text messages to young adults, but only if those individuals had “opted in” to receipt of marketing solicitations on topics that included Navy service. Campbell’s subcontractor generated a list of cellular phone numbers for consenting users and then transmitted the Navy’s message to over 100,000 recipients, including respondent Jose Gomez, who alleges that he did not consent to receive text messages. Gomez filed a nationwide class action, alleging that Campbell violated the Telephone Consumer Protection Act (TCPA), which prohibits “using any automatic dialing system” to send a text message to a cellular telephone absent the recipient’s prior express consent. Gomez sought treble statutory damages for a willful and knowing TCPA violation and an injunction against Campbell’s involvement in unsolicited messaging. Before the deadline for Gomez to file a motion for class certification, Campbell proposed to settle Gomez’s individual claim and filed an offer of judgment pursuant to Federal Rule of Civil Procedure 68.  This strategy, often referred to as a “pick-off,” is seen in many class actions throughout the country.  Gomez did not accept the offer and allowed the Rule 68 submission to lapse on expiration of the 14-day period specified in the Rule. Campbell then moved to dismiss the case pursuant to Rule 12(b)(1) for lack of subject-matter jurisdiction. Campbell argued first that its offer mooted Gomez’s individual claim by providing him with complete relief. Next, Campbell urged that Gomez’s failure to move for class certification before his individual claim became moot caused the putative class claims to become moot as well.

In reaching its holding, the Supreme Court found that Gomez’s complaint was not mooted by Campbell’s unaccepted offer to satisfy his individual claim. Rather, the Court stated that under basic principles of contract law, Campbell’s settlement bid and Rule 68 offer of judgment, once rejected, had no continuing effectiveness. With no settlement offer operative, the parties remained adverse; both retained the same stake in the litigation they had at the outset.

The Court looked to its prior holding in Genesis HealthCare which involved an offer of judgment to satisfy alleged damages under the Fair Labor Standards Act. Specifically, in that case, the Court assumed, without deciding, that an offer of judgment of complete relief, even if unaccepted, moots a plaintiff’s claim. In Campbell, the Court adopted Justice Kagan’s analysis as set forth in the Genesis HealthCare dissent. In dissent, Justice Kagan wrote, “When a plaintiff rejects such an offer – however good the terms – her interest in the lawsuit remains just what it was before. And so too does the court’s ability to grant her relief. An unaccepted settlement offer – like any unaccepted contract offer – is a legal nullity, with no operative effect.”

Interestingly, the Court limited its holding by specifically not deciding whether the result would be different if a defendant deposits the full amount of the plaintiff’s individual claim in an account payable to the plaintiff, and the court then enters judgment for the plaintiff in that amount. Instead, the Court reserved that question for a case in which it is not a hypothetical.

This case limits potential defense strategies when a company faces class claims, especially those under the TCPA. As such, it is imperative for organizations which utilize automated dialing systems to take steps to comply with the TCPA, its accompanying regulations, and related guidance. A great start to compliance, and to understanding the TCPA in general, would be a review of our Comprehensive TCPA FAQs.

For additional insight, including the broader implications the Campbell-Ewald decision may have for class actions, please see the excellent post from our colleagues in the Class and Collective Action group.

The European Court of Human Rights, a body of the Council of Europe, has issued a major court ruling on employee monitoring which deserves attention on this side of the pond and provides some guidance for companies with employees in Europe. Europe has generally taken a more protective stance than the U.S. when it comes to protection of individual privacy. For example, in 2014 the Court of Justice of the European Union, in Google Spain SL v. Agencia Espanola de Proteccion de Datos, held that a Spanish citizen had the “right to be forgotten” and specifically a right to de-list information on Google about his past financial troubles. The gap between the European and U.S. approaches to privacy law may be narrowing ever so slightly, however.

On January 12, 2016, the European Court of Human Rights in Strasbourg issued a decision in the case of Barbulescu v. Romania, Application No. 61496/08. Barbulescu, a citizen of Romania, worked for an unnamed private company in Bucharest. In 2007, he was asked by his company to set up a Yahoo Messenger account for the purpose of responding to client inquiries, and did so. In July of 2007, the company informed him that it had been monitoring his account and that the records showed that he had been using it for personal purposes contrary to internal regulations. Barbulescu denied the personal use, but when confronted with proof, including communications with his fiancée about his “sexual health,” he claimed invasion of his privacy. His employment was terminated on August 1, 2007. Barbulescu challenged his termination in Bucharest County Court, which dismissed his complaint. From there he appealed to the Bucharest Court of Appeal, which upheld the dismissal.

Barbulescu’s case eventually found its way to the European Court of Human Rights, not on the issue of whether he was wrongfully terminated, but on whether the company’s actions violated Article 8 of the European Convention on Human Rights, which guarantees the right to respect for private and family life, home and correspondence.

The Court, in a 6 to 1 decision, held that Article 8 applied, but was not violated in this case. It held that Barbulescu had not convincingly explained why he had used the Yahoo messenger account for personal purposes and that there was nothing to indicate that the Romanian courts failed to strike a fair balance “between the applicant’s right to respect for his private life under Article 8 and his employer’s interests.”

As often occurs in American disputes of this nature, the question of whether the employee was put on notice was critical. The Government of Romania claimed Barbulescu had been given notice that the employer could monitor his communications, but he denied it and there was no signed acknowledgment. The court noted that this gap meant there was “no straightforward answer” to the question before it, which shows that having a clear policy and signed acknowledgement of employee monitoring is always a good idea, in any country.

One judge, Judge Pinto de Albuquerque, dissented, disagreeing with the holding that the “employer’s monitoring was limited in scope and proportionate.” He noted further that:

Internet surveillance in the workplace is not at the employer’s discretionary power. In a time when technology has blurred the line between work life and private life, and some employers allow the use of company-owned equipment for employees’ personal purposes, others allow employees to use their own equipment for work-related matters, and still other employers permit both, the employer’s right to maintain a compliant workplace and the employee’s obligation to complete his or her professional tasks adequately does not justify unfettered control of the employee’s expression on the Internet. Even where there exist suspicions of cyberslacking, diversion of the employer’s IT resources for personal purposes, damage to the employer’s IT systems, involvement in illicit activities, or disclosure of the employer’s trade secrets, the employer’s right to interfere with the employee’s communications is not unrestricted.

Like most cases, the decision likely turned on the particular facts, and the dissent suggests that restrictions on employee monitoring will probably still be subject to greater scrutiny in Europe than in the US (and individual countries have their own specific laws in this area).

The proposals, published in separate Federal Register Notices as Part IV and Part V of Vol. 80 No. 246, identify five types of cybersecurity testing as essential to a sound system safeguards program:  (1) vulnerability testing, (2) penetration testing, (3) controls testing, (4) security incident response plan testing, and (5) enterprise technology risk assessments.
The two proposals would require all derivatives clearing organizations, designated contract markets, swap execution facilities, and swap data repositories to conduct each of the five types of cybersecurity testing, as frequently as indicated by appropriate risk analysis. In addition, the proposals would specify minimum testing frequency requirements for all derivatives clearing organizations and swap data repositories and specified designated contract markets, and require them to have certain tests performed by independent contractors.
As currently drafted, the proposals require the scope of all testing and assessment required by CFTC be broad enough to include all testing of automated systems and controls necessary to identify any vulnerability which, if exploited or accidentally triggered, could enable an intruder or unauthorized user or insider to:
  1. interfere with the registrant’s operations or with fulfillment of its statutory and regulatory responsibilities;
  2. impair or degrade the reliability, security, or capacity of the registrant’s automated systems;
  3. add to, delete, modify, exfiltrate, or compromise the integrity of any data related to the registrant’s regulated activities; or
  4. undertake any other unauthorized action affecting the registrant’s regulated activities or the hardware or software used in connection with those activities.
Importantly, the CFTC published a Fact Sheet summarizing the proposed rulemaking.
Voicing strong support for the proposals, CFTC Commissioner J. Christopher Giancarlo said, “The job of the Commodity Futures Trading Commission as a regulator is to encourage, support, inform and empower this continuous development so that market participants adopt fully optimized and up-to-date cyber defenses.”  Echoing sentiments we have previously expressed, Commissioner Giancarlo went on to acknowledge that “[g]iven the constantly morphing nature of cyber risk, the best defenses provide no guarantee of protection.”
Whether your organization is a registered entity with the CFTC or not, the cybersecurity testing and system risk analysis details set forth in the proposals provide valuable insight into how your organization may take steps to protect itself from a cyber-attack.  The proposals are subject to a 60-day public comment period, which will end on February 22, 2016.

Earlier this month, the Federal Trade Commission (“FTC”) issued a report discussing “big data.” The report compiles the agency’s learning from recent seminars and research, including a public workshop held on September 15, 2014. Known best for its role as the federal government’s consumer protection watchdog, the FTC highlights in the report a number of concerns about uses of big data and the potential harms they may have on consumers. However, while the report’s focus is on the commercial use of big data involving consumer data, it also describes a number of issues raised when big data is employed in the workplace.

Used in the human resources context, big data has many useful applications such as helping companies to better select and manage applicants and employees. The FTC’s report describes a study which shows that “people who fill out online job applications using browsers that did not come with the computer . . . but had to be deliberately installed (like Firefox or Google’s Chrome) perform better and change jobs less often.” Applying this correlation in the hiring process can result in the employer rejecting candidates not because of factors that are job-related, but because they use a particular browser. Whether this would produce the best results for the company is unclear.

Likely spurred at least in part by comments made by EEOC counsel at the FTC’s big data workshop in September 2014, the FTC’s report summarizes the potential ways that using “big data” tools can violate existing employment laws, such as Title VII of the Civil Rights Act of 1964, the Age Discrimination in Employment Act, the Americans with Disabilities Act and the Genetic Information Nondiscrimination Act. The report also includes a brief discussion of “disparate treatment” and “disparate impact” theories, concepts familiar to many employers.

According to the report, facially neutral policies or practices that have a disproportionate adverse effect or impact on a protected class create a disparate impact, unless those practices or policies further a legitimate business need that cannot reasonably be achieved by means that are less disparate in their impact. Consider the application above. Use of a particular browser seems to be facially neutral, but some might argue that selection results based on that correlation can have a disparate impact on certain protected classes. Of course, as the FTC report notes with regard to other uses of big data – a fact-specific analysis will be necessary to determine whether a practice causes a disparate impact that violates law.
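A common first-pass screen for the kind of disparate impact analysis described above is the EEOC’s “four-fifths rule,” under which a selection practice is flagged when a protected group’s selection rate falls below 80% of the highest group’s rate. The rule is not part of the FTC report itself, and the applicant counts in the sketch below are invented for illustration.

```python
# Hedged sketch of the EEOC four-fifths (80%) rule as a first-pass screen
# for disparate impact. All applicant/selection counts are hypothetical.

def selection_rate(selected: int, applicants: int) -> float:
    """Fraction of applicants selected."""
    return selected / applicants

def four_fifths_flag(protected_rate: float, highest_rate: float) -> bool:
    """Flag potential disparate impact when the protected group's selection
    rate is below four-fifths (80%) of the highest group's rate."""
    return protected_rate < 0.8 * highest_rate

# Suppose a browser-based screen selects 50 of 200 applicants in one group
# (25%) but only 30 of 200 in a protected group (15%):
flagged = four_fifths_flag(selection_rate(30, 200), selection_rate(50, 200))
print(flagged)  # 0.15 < 0.8 * 0.25 = 0.20 -> prints True
```

A flag under this heuristic is only a starting point; as the report notes, a fact-specific analysis is still required to determine whether a practice actually violates the law.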

Two other concerns discussed in the FTC’s report that have workplace implications include:

  • Biases in the underlying data. Big data is about the collection, compilation and analysis of massive amounts of data. If hidden biases exist in these stages of the process, “then some statistical relationships revealed by that data could perpetuate those biases.” Yes, this means “garbage in, garbage out.” The report provides a helpful example: a company’s big data algorithm only considers applicants from “top tier” colleges to help them make hiring decisions. That company may be incorporating previous biases in college admission decisions. Thus, it is critical to understand existing biases in data as they could undermine the usefulness of the end results.
  • Unexpectedly learning sensitive information. Employers using big data can inadvertently come into possession of sensitive personal information. The report describes a study which combined data on Facebook “Likes” and limited survey information to determine that researchers could accurately predict a male user’s sexual orientation 88 percent of the time, a user’s ethnic origin 95 percent of the time, and whether a user was Christian or Muslim 82 percent of the time. Clearly, access to this information could expose an employer to claims that its hiring decisions were based on this information, and not on other legitimate factors.

Companies can maximize the benefits and minimize the risks of big data, according to the FTC report, by asking the following questions:

  • How representative is your data set?
  • Does your data model account for biases?
  • How accurate are your predictions based on big data?
  • Does your reliance on big data raise ethical or fairness concerns?

There certainly is much to consider before using big data technology in the workplace, or for commercial purposes. As big data applications become more widespread and cost efficient, employers may feel the need to use it to remain competitive. They will need to proceed cautiously, however, and understand the technology, the data collected and whether the correlations work and work ethically.

You’ve spent extensive time and effort, not to mention money, establishing your company’s reputation only to have the company defamed or disparaged anonymously online. This is a scenario which many organizations face in today’s virtual marketplace. As a recent decision by the Delaware Superior Court illustrates, dealing with these types of issues is often difficult and complicated, especially from a legal perspective.

Late last year, the Delaware Superior Court denied SunEnergy1’s efforts to identify an anonymous poster who allegedly made defamatory comments about SunEnergy1’s business on the website Glassdoor.com. Specifically, SunEnergy1 and two individuals (Plaintiffs) filed a defamation lawsuit in North Carolina Business Court against a former employee and Chief Financial Officer, Jeffery Brown (Defendant). The suit was filed in February 2014 after statements, allegedly made by Brown, were posted on Glassdoor.com on December 15, 2013. The anonymous posting was titled, “This is a terrible place to work” and made a number of unflattering statements about the work environment at SunEnergy1, including labeling the company’s culture as one of “oppression, untruths, and bullying.”

Glassdoor.com is one of several websites where job-seekers can post resumes and employers can advertise career openings. As its name implies, Glassdoor.com describes itself as a “free jobs and career community that offers the world an inside look at jobs and companies.” Glassdoor.com also utilizes a rating system based on user input and offers a form where users can post reviews about the companies listed.

After filing suit, Plaintiffs served an out-of-state subpoena on Glassdoor in Delaware and ultimately filed a Delaware action to compel Glassdoor to identify the anonymous poster’s Internet Protocol (IP) address. In response, Glassdoor filed a motion to quash, arguing the Plaintiffs’ subpoena was overbroad, unduly burdensome, and infringed upon the anonymous user’s First Amendment right to freedom of speech.

The Delaware Commissioner found that, although the subpoena arose from a North Carolina lawsuit, Delaware’s standard for overcoming online anonymity is the correct source of law because the information was being sought from a Delaware company. The Court stated the right to discover the identity of an anonymous author alleged to have made defamatory statements must be balanced against the author’s First Amendment right to free speech and to remain anonymous. In the precedential case Doe v. Cahill, the Delaware Supreme Court held that “[courts] must adopt a standard that appropriately balances one person’s right to speak anonymously against another person’s right to protect his reputation.” In short, under Delaware law, a party seeking to identify an anonymous speaker must make a showing that a civil wrong has been committed.

Applying Delaware law, the Court stated it needed to decide whether any “reasonable person” could have interpreted the statements in the December 15, 2013 review “as being anything other than an opinion.” In making its determination, the Court looked closely at the nature of Glassdoor.com and found it is a website for employment and company evaluation, not a news website where there is an expectation of objective reporting and journalistic standards. Similarly, the Court stated it is not a website where a person would go to find detailed factual information about a company, such as earnings reports and SEC filings. Rather, the Court found it “quite evident” that Glassdoor.com is a website where people go to express their personal opinions about having worked for a company—not a website where a reasonable person would go looking for objective facts and information about a company. The Court went on to say that it was “readily apparent that the author of this review is unhappy about his or her time at SunEnergy1 and has the proverbial axe to grind—no reasonable person would think otherwise. The fact that the author is a ‘former employee’ who wished to remain anonymous only cements this conclusion.”

In denying Plaintiffs’ motion to compel identification of the anonymous user, and granting Glassdoor’s motion to quash, the Court found that even when viewed in the light most favorable to Plaintiffs, the content of the review was not defamation and was instead nothing more than a rant by a former employee, citing anecdotal evidence, about why he or she thinks it is a terrible place to work.

Unfortunately for employers, or organizations which are similarly disparaged, the Court did not consider the potential harm anonymous posts like those at issue here could have on the organization’s reputation. In fact, “opinions” or insights from former employees are exactly why many users frequent such sites.

As the year draws to a close, employer claims under the Computer Fraud and Abuse Act (“CFAA”) against departing employees for stealing or otherwise diverting employer information without authorization to do so are dying slow deaths in many federal courts across the nation. As noted over on the Non-Compete and Trade Secrets Report, the U.S. federal circuits are split regarding whether an employee acts “without authorization” under CFAA when he or she steals employer confidential data at or near termination. The Second, Ninth and Fourth Circuits hold that as long as the employee was permitted to be on a computer for any purpose, diversion of employer information is “authorized” under CFAA. In contrast, the First, Fifth, Seventh, and Eleventh Circuits have adopted a broad construction, allowing CFAA claims alleging an employee misused employer information that he or she was otherwise permitted to access.

Now, in North Carolina at least, employers may have better luck fighting malevolent employees under the North Carolina statutory corollary to the CFAA. In Spirax Sarco, Inc. v. SSI Eng’g, the Eastern District of North Carolina put teeth into the North Carolina Computer Trespass Act (“NC Computer Trespass Act”), giving employers a new weapon in the fight against trade secret and confidential information misappropriation by departing employees. The NC Computer Trespass Act, N.C. Gen. Stat. § 14-458, provides, in relevant part:

(a) . . . [I]t shall be unlawful for any person to use a computer or computer network without authority and with the intent to do any of the following:

(1) Temporarily or permanently remove, halt, or otherwise disable any computer data, computer programs, or computer software from a computer or a computer network. . . .

(3) Alter or erase any computer data, computer program or computer software. . . . [or]

(5) Make or cause to be made an unauthorized copy, in any form, including, but not limited to, any printed or electronic form of computer data, computer programs, or computer software residing in, communicated by, or produced by computer or computer network.

Unlike the CFAA, the NC Computer Trespass Act defines “without authority” clearly. An employee acts “without authority” when either the employee has no right or permission to use a computer, or the employee uses a computer in a manner exceeding the right or permission given by the employer. The Court held that a departing employee who intentionally used his employer-issued laptop to download vast quantities of computer files to his own media devices and Dropbox account was acting “without authority” under the NC Computer Trespass Act. The Court also noted that the former employee deleted vast quantities of computer files from the Spirax-issued laptop without authority to do so.

Spirax provides employers with employees in North Carolina a new tool for protecting corporate information without the need to wade into the murky waters of the CFAA.

Can we prohibit employees from making audio recordings at work?  As advancements in technology continue, and it becomes easier and easier for employees to surreptitiously record conversations, this inquiry is posed by many employers.  In fact, we discussed this very question back in 2013.  Unfortunately, the answer to this question is perhaps the most often used attorney response – “Maybe.”  This is especially true given the recent decision from the National Labor Relations Board (NLRB) in Whole Foods Market, Inc. and United Food and Commercial Workers, Local 919 and Workers Organizing Committee of Chicago.  For employers, or those looking to prohibit the use of recording devices, the NLRB’s decision, issued on December 24, 2015, is more akin to coal than an early Christmas present.

This matter was before the NLRB after the NLRB’s General Counsel filed exceptions to the decision of Administrative Law Judge Steven Davis.  That decision, issued on October 30, 2013, was previously discussed by our labor colleagues.  In his decision, ALJ Davis found that the company’s nationwide policy banning employee recording of workplace “conversations” was lawful.  The policy’s stated purpose was “to eliminate a chilling effect… when one person is concerned that his or her conversation with another is being secretly recorded.”  The prohibition otherwise complements the company’s well-established and pro-active open-door policy.  The ALJ found the company has a legitimate business interest in promoting a culture encouraging employees to “speak up and speak out.”

In his exceptions to the ALJ’s decision, the NLRB’s General Counsel asserted that recording conversations in the workplace is a protected right and that employees would reasonably interpret the rules to prohibit their use of cameras or recording devices in the workplace for employees’ mutual aid and protection.

The NLRB found, contrary to the ALJ, that the rules at issue would reasonably be construed by employees to prohibit Section 7 activity.  The NLRB went on to say that photography and audio or video recording in the workplace are protected by Section 7 if employees are acting in concert for their mutual aid and protection and no overriding employer interest is present.  Specifically, the NLRB stated that such protected conduct may include, for example, recording images of protected picketing, documenting unsafe workplace equipment or hazardous working conditions, documenting or publicizing discussions about terms and conditions of employment, documenting inconsistent application of employer rules, or recording evidence to preserve it for later use in administrative or judicial forums in employment-related actions.

Importantly, the decision does state that the NLRB is not making any findings as to whether particular recordings are concerted and is also not finding that recording necessarily constitutes concerted activity.  Similarly, the NLRB stated it is not holding that all rules regulating recording are invalid.  Rather, the NLRB clarified it found only that recording may, under certain circumstances, constitute protected concerted activity under Section 7 (the dreaded “Maybe”) and that the rules at issue in this matter, which would reasonably be read by employees to prohibit protected concerted recording, violate the National Labor Relations Act.

While mentioned in a footnote to the decision, it is important to note that some states (generally in statutes addressing wiretapping) require all parties to a conversation to consent before that conversation may be recorded.  To overcome these statutory prohibitions on surreptitious recording, the NLRB focused on the broad application of these recording rules to all jurisdictions where the Respondent has locations.  It is unclear whether the NLRB’s decision would have been different if the rules were limited to those states where nonconsensual recording is unlawful.

This decision, along with others by the NLRB and state and federal courts, highlights the difficulties employers face when attempting to prohibit recording or the use of recording devices.  As such, employers interested in implementing workplace rules or policies regarding recording are urged to consider existing legal precedent on this issue, set forth specific legitimate business interests for the prohibition, and consult with counsel before development and implementation.

Earlier this year, we reported that the Internal Revenue Service clarified that it would not consider the value of credit monitoring and other identity protection services provided by employers to employees in connection with a data breach to be taxable income to the employees. IRS Announcement 2015-22. In response to comments, the IRS expanded this tax treatment to apply when employers provide such services before a breach happens. IRS Announcement 2016-02.

In the more recent Announcement, the IRS concludes:

Accordingly, the IRS will not assert that an individual must include in gross income the value of identity protection services provided by the individual’s employer or by another organization to which the individual provided personal information (for example, name, social security number, or banking or credit account numbers). Additionally, the IRS will not assert that an employer providing identity protection services to its employees must include the value of the identity protection services in the employees’ gross income and wages. The IRS also will not assert that these amounts must be reported on an information return (such as Form W-2 or Form 1099-MISC) filed with respect to such individuals. Any further guidance on the taxability of these benefits will be applied prospectively.

This is welcome news for employers looking for ways to help their employees avoid being affected by a data breach, and to mitigate the effects should employees become victims of a breach. The employer can provide the services without increasing its federal payroll taxes, and employees can receive the services without incurring any additional federal tax liability. Employers and employees will still have to consider any potential state and local tax implications, and should confer with their tax advisors accordingly.

The Announcement states, however, that it does not apply to cash received in lieu of identity protection services, or to proceeds received under an identity theft insurance policy. Thus, for example, the tax treatment of proceeds received under an identity theft insurance policy continues to be governed by existing law.

As a result of this action, and because of how prevalent data breaches have become, it is likely that more employers will be looking to provide data breach monitoring and related services to their employees. While these services would not constitute benefits covered under the Employee Retirement Income Security Act (ERISA), as with other employee benefits, employers will want to carefully select the vendors that will provide the services, and take other steps to incorporate this into their overall benefit offerings.