New Safe Harbor Framework!

Compliance and privacy officials all over the U.S. just let out a breath they had been holding since last October, when the European Court of Justice invalidated the U.S.-EU Safe Harbor Program. BNA is reporting that negotiators have just reached an agreement on a new data transfer framework between the U.S. and the European Union. Details are forthcoming, and we will report on them here as we learn more about this development.

Safe Harbor Resolution…Not So Fast

UPDATE: Although we previously reported that a Safe Harbor resolution might be imminent, Bloomberg BNA is now reporting that a European Commission official has said there may be no deal today to replace the U.S.-EU Safe Harbor Program.

According to BNA, when European Commissioner for Justice, Consumers and Gender Equality Vera Jourova goes before the European Parliament later today, she will only provide a status update on the negotiations, as opposed to announcing a resolution of this issue. Without an announcement resolving the matter today, it is possible the Article 29 Working Party (made up of data protection officials from the 28 EU member states) may decide during its scheduled meeting tomorrow that individual Data Protection Authorities will start enforcement actions against companies over data transfers that are still based on the invalidated Safe Harbor Program.

We will continue to update this situation as it unfolds.

Making Sausage: The Senate and the House Must Reconcile Judicial Redress Legislation While Safe Harbor Negotiations Continue

The folks over at Politico are reporting that the Senate Judiciary Committee struck a deal Wednesday night regarding the Judicial Redress Act. The committee adopted Senator John Cornyn’s amendment that ties the bill’s privacy protections to the proposed new Safe Harbor Agreement being negotiated between the U.S. and the EU. The Judicial Redress Bill attempts to strike a balance between providing EU citizens a judicial forum in which to bring privacy related claims and the need for the U.S. to protect national security. The Senate and House now have to agree on a bill to send to the President. Thousands of U.S. companies are watching with interest in the hopes that a new Safe Harbor agreement can be reached to avoid a last-minute scramble to put model data transfer agreements or binding corporate rules in place to allow the free flow of data across borders.

New U.S.-EU Safe Harbor Imminent?

Bloomberg BNA is reporting that the EU hopes to reach a Safe Harbor deal with the U.S. on Monday, February 1, 2016. Speaking at the Computers, Privacy and Data Protection Conference in Brussels, Paul F. Nemitz, Director for Fundamental Rights and Union Citizenship at the Directorate-General Justice of the European Commission, said, “[w]e hope to be able to reach an [acceptable] arrangement.” Mr. Nemitz is considered one of the top European Commission officials negotiating with the U.S. on a successor to the U.S.-EU Safe Harbor data transfer program.

As previously reported, on October 6, 2015, the Court of Justice of the European Union overturned the Safe Harbor program when it ruled in Schrems v. Data Protection Commissioner that the voluntary program did not provide adequate protection to the personal data of EU citizens. Post-Schrems, U.S. companies have been unclear about how to transfer data out of the EU in a compliant manner.

Mr. Nemitz said the European Commissioner for Justice, Consumers and Gender Equality, Vera Jourova, will go to the European Parliament Monday evening to “inform member states then of the outcome” of talks to reach a resolution on a possible replacement for the Safe Harbor. A January 31, 2016 deadline has been set by the Article 29 Working Party of data protection officials from the 28 EU member states. The hope for agreement by February 1, 2016 is significant because the Working Party is scheduled to meet February 2, 2016 to discuss this issue.

Interestingly, U.S. Federal Trade Commissioner Julie Brill also appeared with Mr. Nemitz at the conference. While Ms. Brill confirmed “[t]here’s absolutely a path to agreement,” she was less committal about a potential Monday resolution, saying, “[w]e need to get there. We can’t allow this to continue to be a stumbling block. But I don’t have a crystal ball.”

We will continue to monitor this situation and provide updates as we obtain them.

Top 10 for 2016 – Happy Data Privacy Day

In honor of Data Privacy Day, we provide the following “Top 10 for 2016.”  While the list is by no means exhaustive, it does provide some hot topics for organizations to consider in 2016.

  1. EU/U.S. Data Transfer (status of Safe Harbor).  On October 6, 2015, the Court of Justice of the European Union (CJEU) ruled in Schrems v. Data Protection Commissioner (Case C-362/14) that the voluntary Safe Harbor Program did not provide adequate protection to the personal data of EU citizens. The Safe Harbor Program was used extensively by organizations that needed to transfer data from the EU to the U.S. Post-Schrems, U.S. companies have been unclear about how to transfer data out of the EU in a compliant manner. The ultimate resolution of this issue is one of the most anticipated privacy topics for 2016.
  2. People Analytics including Employee Tracking/Wearables.  The Federal Trade Commission’s January 2016 report discussing “big data” raised a number of issues for organizations concerning the use of data analytics with respect to consumer data, as well as the application of big data tools in the workplace. People analytics refers generally to a data-driven approach to managing an organization’s human capital, and it is likely to be a significant trend for employers in the months and years ahead. Some of the data used to perform the analytics is collected through the devices employees use and wear. For example, as GPS- and RFID-enabled devices become more prevalent, employers are faced with the difficulty of balancing the workplace risks against the ability to obtain information about employees’ whereabouts, which can substantially increase productivity. Similarly, wellness programs seek to incentivize employees (including members of their households) to live “healthier” lives. Wearable technologies such as Fitbit allow for the collection of data that, when analyzed, can have substantial benefits and help control healthcare costs, but they can also raise privacy and discrimination risks.
  3. Risk Assessment/Written Information Security Program. Many businesses remain unaware of how much personal and confidential information they maintain, who has access to it, how it is used and disclosed, how it is safeguarded, and so on. Getting a handle on a business’s critical information assets is the first, and perhaps most important, step in tackling information risk: it is logically impossible to adequately safeguard something you do not know exists. In fact, failing to conduct a risk assessment may subject the business to penalties under federal and/or state law. Even if adopting a written information security program (WISP) to protect personal information is not an express statutory or regulatory mandate in your state (as it is in states such as CA, CT, FL, MA, MD, and OR), having one is critical to addressing information risk. Importantly, an organization’s WISP should also address company data outside of the company’s control, such as data or information provided to vendors who perform services for the organization. Not only will a WISP better position a company when defending claims related to a data breach, it will also help the company manage and safeguard critical information and potentially avoid a breach in the first place.
  4. The Telephone Consumer Protection Act (TCPA).  According to statistics compiled by WebRecon LLC, 3,710 TCPA lawsuits were filed in 2015, an increase of 45% over 2014 and the 8th year in a row in which the number of TCPA suits rose. Tellingly, 23.6% of those suits (877) were filed as putative class actions. With the recent SCOTUS decision in Campbell-Ewald making defense of class actions under the TCPA more difficult, we expect the number of TCPA suits to continue to grow in 2016. These suits are not aimed only at large companies; they often target small businesses that may unknowingly violate the TCPA. With statutory damages ranging from $500 to $1,500 per violation (e.g., per fax or text sent or call made), these suits often result in potential damages in the hundreds of thousands, if not millions, of dollars; a back-of-the-envelope illustration appears after this list. Understanding the FAQs for the TCPA and taking steps to comply is a great first step as we enter 2016.
  5. Industry Specific Guidance.  Whether it is the U.S. Food and Drug Administration (FDA) or the U.S. Commodity Futures Trading Commission (CFTC), organizations will need to remain vigilant in 2016 to ensure they are addressing industry specific rules or guidance regarding cybersecurity and the safeguarding of the information they maintain.
  6. BYOD/COPE.  Many organizations are adopting Bring Your Own Device (“BYOD”) programs that allow employees to use their own electronic devices in the workplace, often without considering all of the risks and related issues. Others are sticking with Corporate Owned, Personally Enabled (“COPE”) programs.  If you are considering BYOD, you should review our comprehensive BYOD issues outline and determine whether BYOD or COPE is the best option for your organization.
  7. Investigating Social Media.  The use of social media continues to grow on a global scale, and the content available on a user’s profile or account is often sought in connection with litigation and/or employment decisions. While public content may generally be viewed without issue, employers need to be aware of how they are accessing social media content. This is especially true as the list of states enacting legislation to protect social media privacy continues to grow. In a litigation context, if private content is accessed improperly, serious repercussions can follow.
  8. Federal Trade Commission (FTC) & Federal Communications Commission (FCC) Enforcement Re: Data Security.  Both the FTC and FCC continued enforcement actions in 2015 in connection with companies’ alleged failures to properly safeguard data. FCC actions resulted in consent decrees that included penalties in the hundreds of thousands of dollars and mirrored previous consent decrees entered into by the FTC. However, 2015 decisions in cases stemming from the FTC’s actions suggest the FTC may have difficulty meeting its burden of proving that a company’s allegedly unreasonable data security practices caused substantial consumer injury, or that any consumer whose personal information was maintained by a company suffered harm as a result of the alleged conduct. For 2016, it remains to be seen how far the FCC and FTC will go in continuing enforcement actions related to data security. Nevertheless, organizations still need to be conscious of the statements and promises they make concerning their data security practices and implement appropriate safeguards to protect the personal information they maintain.
  9. HIPAA Compliance. The Office for Civil Rights (OCR) has stated that in early 2016 it will launch Phase 2 of its audit program measuring compliance with HIPAA’s privacy, security and breach notification requirements by covered entities and business associates. As we previously discussed, having the right documents in place can go a long way toward helping an organization survive an OCR HIPAA audit. Now that these audits appear to be coming, it is important that covered entities and business associates invest the time to identify and close any HIPAA compliance gaps before an OCR investigator does it for them. This is particularly true as some of the largest HIPAA settlements to date have been less about harm and more about compliance.
  10. Develop a Plan for Breach Notification. All state and federal data breach notification requirements currently in effect require that notice be provided as soon as possible (with some setting forth specific time periods), and failing to respond appropriately could result in significant liability. Employers need to be especially conscious of data breach issues because the leading cause of breaches is employee error. Developing a breach response plan is not only prudent but also may be required under federal or state law. A proactive approach is often the simplest and cheapest way to avoid liability.
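
As referenced in item 4 above, here is a back-of-the-envelope illustration of TCPA exposure using entirely hypothetical numbers (a single blast of 1,000 texts sent without prior express consent):

```python
# Hypothetical illustration only: one marketing blast of 1,000 texts sent without prior express consent.
unsolicited_messages = 1_000
statutory_damages_per_violation = 500     # baseline statutory damages per violation
willful_damages_per_violation = 1_500     # up to treble damages for willful or knowing violations

print(f"Baseline exposure: ${unsolicited_messages * statutory_damages_per_violation:,}")  # $500,000
print(f"Willful exposure:  ${unsolicited_messages * willful_damages_per_violation:,}")    # $1,500,000
```

Even a modest campaign can generate seven-figure exposure once treble damages for willful violations are in play.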

Be Vigilant and Watch for New Legislation. Managing data and ensuring its privacy, security and integrity is critical for businesses and individuals, and it is increasingly the subject of broad, complex regulation. Companies are left to navigate a constantly evolving web of state legislation and industry guidance, and they need to remain vigilant in order to stay compliant and competitive.

FDA Issues Draft Cybersecurity Guidance for Device Manufacturers

Last week, the U.S. Food and Drug Administration (FDA) issued draft guidance outlining important steps medical device manufacturers should take to address cybersecurity risks to keep patients safe and better protect the public health. The draft guidance, which details the agency’s recommendations for monitoring, identifying, and addressing cybersecurity vulnerabilities in medical devices after they have entered the market, is part of the FDA’s ongoing efforts to ensure the safety and effectiveness of medical devices in the face of potential cyber threats.

The FDA has identified cybersecurity threats to medical devices as a growing concern. While manufacturers can incorporate controls in the design of a product to help prevent these risks, it is essential that manufacturers also consider improvements during maintenance of devices, as the evolving nature of cyber threats means risks may arise throughout a device’s entire lifecycle.

Commenting on the guidance, Suzanne Schwartz, M.D., M.B.A., Associate Director for Science and Strategic Partnerships and Acting Director of Emergency Preparedness/Operations and Medical Countermeasures in the FDA’s Center for Devices and Radiological Health, said,

All medical devices that use software and are connected to hospital and health care organizations’ networks have vulnerabilities—some we can proactively protect against, while others require vigilant monitoring and timely remediation. [The] draft guidance will build on the FDA’s existing efforts to safeguard patients from cyber threats by recommending medical device manufacturers continue to monitor and address cybersecurity issues while their product is on the market.

The draft guidance recommends implementing a structured and systematic cybersecurity risk management program to identify and respond in a timely fashion to vulnerabilities. Recommended components include:

  • Application of the 2014 NIST voluntary framework for Improving Critical Infrastructure Cybersecurity;
  • Monitoring cybersecurity information sources for identification and detection of cybersecurity vulnerabilities and risk;
  • Understanding, assessing and detecting presence and impact of a vulnerability;
  • Establishing and communicating processes for vulnerability intake and handling;
  • Clearly defining essential clinical performance to develop mitigations that protect, respond and recover from the cybersecurity risk;
  • Adopting a coordinated vulnerability disclosure policy and practice; and
  • Deploying mitigations that address cybersecurity risk early and prior to exploitation.

In addition to outlining program components, the guidance also includes proposed steps device manufacturers should take to report cybersecurity vulnerabilities. The FDA specified that, in the bulk of cases, advance notice of actions taken by manufacturers to address cybersecurity vulnerabilities will not be required. However, the FDA would require device manufacturers to provide the agency notice for the small subset of cybersecurity vulnerabilities that may compromise the clinical performance of a device and present a reasonable probability of serious adverse health consequences or death. In instances where a vulnerability is quickly addressed in a way that sufficiently reduces the risk of harm to patients, the guidance specifies that the FDA does not intend to enforce urgent reporting if: there are no serious adverse events or deaths associated with the vulnerability; within 30 days of learning of the vulnerability, the manufacturer notifies users and implements changes that reduce the risk to an acceptable level; and the manufacturer is a participating member of an Information Sharing and Analysis Organization (ISAO) and reports the vulnerability, its assessment, and remediation to the ISAO.
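
For readers who think in code, here is a minimal sketch (in Python, with hypothetical field names) of the three-condition exception described above. It is only an illustration of the draft guidance’s logic, not an implementation of any FDA requirement.

```python
from dataclasses import dataclass

@dataclass
class VulnerabilityResponse:
    """Hypothetical record of how a manufacturer handled a device vulnerability."""
    serious_adverse_events_or_deaths: bool  # any serious adverse events or deaths tied to the vulnerability
    days_to_notify_and_remediate: int       # days from learning of the vulnerability to user notice and risk reduction
    risk_reduced_to_acceptable_level: bool  # changes reduce residual risk to an acceptable level
    is_isao_member: bool                    # participating member of an ISAO
    reported_to_isao: bool                  # vulnerability, assessment, and remediation reported to the ISAO

def urgent_reporting_expected(v: VulnerabilityResponse) -> bool:
    """Return True if urgent reporting would still be expected under the draft guidance's three-part exception."""
    exception_applies = (
        not v.serious_adverse_events_or_deaths
        and v.days_to_notify_and_remediate <= 30
        and v.risk_reduced_to_acceptable_level
        and v.is_isao_member
        and v.reported_to_isao
    )
    return not exception_applies
```

For example, a vulnerability remediated within 20 days with user notice, no associated adverse events, and a report to the manufacturer’s ISAO would fall within the exception; missing any one of the conditions would not.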

In summarizing the FDA’s goal, Schwartz said, “The FDA is encouraging medical device manufacturers to take a proactive approach to cybersecurity management of their medical devices…[o]nly when we work collaboratively and openly in a trusted environment, will we be able to best protect patient safety and stay ahead of cybersecurity threats.”

Whether or not your organization is impacted by the FDA draft guidance, the core principles of “Identify, Protect, Detect, Respond, and Recover” should be followed by all organizations as they address cybersecurity. The draft guidance is subject to a 90-day public comment period.

SCOTUS: Offer of Judgment Does Not Moot TCPA Case

Today, in a 6-3 decision, the Supreme Court of the United States held in Campbell-Ewald Co. v. Gomez that an unaccepted settlement offer or offer of judgment does not moot a plaintiff’s case. As previously discussed, the Supreme Court granted a petition for a writ of certiorari on May 18, 2015 and heard arguments in the case on October 14, 2015.

The United States Navy contracted with petitioner Campbell-Ewald Company (Campbell) to develop a multimedia recruiting campaign that included the sending of text messages to young adults, but only if those individuals had “opted in” to receipt of marketing solicitations on topics that included Navy service. Campbell’s subcontractor generated a list of cellular phone numbers for consenting users and then transmitted the Navy’s message to over 100,000 recipients, including respondent Jose Gomez, who alleges that he did not consent to receive text messages. Gomez filed a nationwide class action, alleging that Campbell violated the Telephone Consumer Protection Act (TCPA), which prohibits “using any automatic dialing system” to send a text message to a cellular telephone, absent the recipient’s prior express consent. Gomez sought treble statutory damages for a willful and knowing TCPA violation and an injunction against Campbell’s involvement in unsolicited messaging. Before the deadline for Gomez to file a motion for class certification, Campbell proposed to settle Gomez’s individual claim and filed an offer of judgment pursuant to Federal Rule of Civil Procedure 68. This strategy, often referred to as a “pick-off,” is seen in many class actions throughout the country. Gomez did not accept the offer and allowed the Rule 68 submission to lapse on expiration of the time (14 days) specified in the Rule. Campbell then moved to dismiss the case pursuant to Rule 12(b)(1) for lack of subject-matter jurisdiction. Campbell argued first that its offer mooted Gomez’s individual claim by providing him with complete relief. Next, Campbell urged that Gomez’s failure to move for class certification before his individual claim became moot caused the putative class claims to become moot as well.

In reaching its holding, the Supreme Court found that Gomez’s complaint was not mooted by Campbell’s unaccepted offer to satisfy his individual claim. Rather, the Court stated that under basic principles of contract law, Campbell’s settlement bid and Rule 68 offer of judgment, once rejected, had no continuing effectiveness. With no settlement offer operative, the parties remained adverse; both retained the same stake in the litigation they had at the outset.

The Court looked to its prior decision in Genesis HealthCare, which involved an offer of judgment to satisfy alleged damages under the Fair Labor Standards Act. In that case, the Court assumed, without deciding, that an offer of judgment of complete relief, even if unaccepted, moots a plaintiff’s claim. In Campbell, the Court adopted the analysis Justice Kagan set forth in her Genesis HealthCare dissent: “When a plaintiff rejects such an offer – however good the terms – her interest in the lawsuit remains just what it was before. And so too does the court’s ability to grant her relief. An unaccepted settlement offer – like any unaccepted contract offer – is a legal nullity, with no operative effect.”

Interestingly, the Court limited its holding by specifically not deciding whether the result would be different if a defendant deposits the full amount of the plaintiff’s individual claim in an account payable to the plaintiff, and the court then enters judgment for the plaintiff in that amount. Instead, the Court reserved that question for a case in which it is not a hypothetical.

This case limits potential defense strategies when a company faces class claims, especially those under the TCPA. As such, it is imperative for organizations which utilize automated dialing systems to take steps to comply with the TCPA, its accompanying regulations, and related guidance. A great start to compliance, and to understanding the TCPA in general, would be a review of our Comprehensive TCPA FAQs.

For additional insight, including the broader implications the Campbell-Ewald decision may have for class actions, please see the excellent post from our colleagues in the Class and Collective Action group.

Council of Europe Issues Major Ruling on Employee Monitoring

The European Court of Human Rights, a body of the Council of Europe, has issued a major ruling on employee monitoring that deserves attention on this side of the pond and provides some guidance for companies with employees in Europe. Europe has generally taken a more protective stance than the U.S. on individual privacy. For example, in 2014 the Court of Justice of the European Union, in Google Spain SL v. Agencia Espanola de Proteccion de Datos, held that a Spanish citizen had the “right to be forgotten,” and specifically a right to de-list information on Google about his past financial troubles. The gap between the European and U.S. approaches to privacy law may be narrowing ever so slightly, however.

On January 12, 2016, the European Court of Human Rights in Strasbourg issued its decision in Barbulescu v. Romania, Application No. 61496/08. Barbulescu, a citizen of Romania, worked for an unnamed private company in Bucharest. In 2007, he was asked by his company to set up a Yahoo Messenger account for the purpose of responding to client inquiries, and did so. In July 2007, the company informed him that it had been monitoring his account and that the records showed he had been using it for personal purposes contrary to internal regulations. Barbulescu denied the personal use, but when confronted with proof, including communications with his fiancée about his “sexual health,” he claimed invasion of his privacy. His employment was terminated on August 1, 2007. Barbulescu challenged his termination in Bucharest County Court, which dismissed his complaint. From there he appealed to the Bucharest Court of Appeal, which upheld the dismissal.

Barbulescu’s case eventually found its way to the European Court of Human Rights, not on the issue of whether he was wrongfully terminated, but on whether the company’s actions violated Article 8 of the European Convention on Human Rights, which guarantees the right to respect for private and family life, home and correspondence.

The Court, in a 6 to 1 decision, held that Article 8 applied, but was not violated in this case. It held that Barbulescu had not convincingly explained why he had used the Yahoo messenger account for personal purposes and that there was nothing to indicate that the Romanian courts failed to strike a fair balance “between the applicant’s right to respect for his private life under Article 8 and his employer’s interests.”

As often occurs in American disputes of this nature, the question of whether the employee was put on notice was critical. The Government of Romania claimed Barbulescu had been given notice that the employer could monitor his communications, but he denied it and there was no signed acknowledgment. The court noted that this gap meant there was “no straightforward answer” to the question before it, which shows that having a clear policy and a signed acknowledgment of employee monitoring is always a good idea, in any country.

One judge, Judge Pinto de Albuquerque, dissented, disagreeing with the holding that the “employer’s monitoring was limited in scope and proportionate.” He noted further that:

Internet surveillance in the workplace is not at the employer’s discretionary power. In a time when technology has blurred the line between work life and private life, and some employers allow the use of company-owned equipment for employees’ personal purposes, others allow employees to use their own equipment for work-related matters, and still other employers permit both, the employer’s right to maintain a compliant workplace and the employee’s obligation to complete his or her professional tasks adequately does not justify unfettered control of the employee’s expression on the Internet. Even where there exist suspicions of cyberslacking, diversion of the employer’s IT resources for personal purposes, damage to the employer’s IT systems, involvement in illicit activities, or disclosure of the employer’s trade secrets, the employer’s right to interfere with the employee’s communications is not unrestricted.

Like most cases, the decision likely turned on its particular facts, and the dissent suggests that employee monitoring will probably still be subject to greater scrutiny in Europe than in the U.S. (and individual countries have their own specific laws in this area).

CFTC Approves Proposed Cybersecurity Regulations

Recognizing cybersecurity as one of the most important issues facing financial markets today, and identifying cyber-attacks as a top threat, the U.S. Commodity Futures Trading Commission (CFTC) unanimously approved proposed enhanced cybersecurity rules for derivatives clearing organizations, trading platforms, and swap data repositories.

The proposals, published in separate Federal Register Notices as Part IV and Part V of Vol. 80, No. 246, identify five types of cybersecurity testing as essential to a sound system safeguards program: (1) vulnerability testing, (2) penetration testing, (3) controls testing, (4) security incident response plan testing, and (5) enterprise technology risk assessments.

The two proposals would require all derivatives clearing organizations, designated contract markets, swap execution facilities, and swap data repositories to conduct each of the five types of cybersecurity testing, as frequently as indicated by appropriate risk analysis. In addition, the proposals would specify minimum testing frequency requirements for all derivatives clearing organizations and swap data repositories and specified designated contract markets, and require them to have certain tests performed by independent contractors.
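
As a rough illustration only, the sketch below (Python, with hypothetical frequencies and placeholder flags that are not the minimums specified in the proposals) shows how a registrant might track the five required testing types internally:

```python
from dataclasses import dataclass

@dataclass
class SafeguardsTest:
    """Hypothetical internal record for one type of required cybersecurity testing."""
    name: str
    frequency_months: int          # how often the test is assumed to be performed (placeholder values)
    independent_contractor: bool   # whether an outside firm is assumed to perform it (placeholder)

testing_program = [
    SafeguardsTest("vulnerability testing", 3, False),
    SafeguardsTest("penetration testing", 12, True),
    SafeguardsTest("controls testing", 12, True),
    SafeguardsTest("security incident response plan testing", 12, False),
    SafeguardsTest("enterprise technology risk assessment", 12, False),
]

for test in testing_program:
    performer = "independent contractor" if test.independent_contractor else "internal team"
    print(f"{test.name}: every {test.frequency_months} months ({performer})")
```

The actual frequencies and independence requirements would come from the final rules and the registrant’s own risk analysis, not from this sketch.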

As currently drafted, the proposals would require that the scope of all testing and assessment required by the CFTC be broad enough to include all testing of automated systems and controls necessary to identify any vulnerability which, if exploited or accidentally triggered, could enable an intruder, unauthorized user, or insider to:

  1. interfere with the registrant’s operations or with fulfillment of its statutory and regulatory responsibilities;
  2. impair or degrade the reliability, security, or capacity of the registrant’s automated systems;
  3. add to, delete, modify, exfiltrate, or compromise the integrity of any data related to the registrant’s regulated activities; or
  4. undertake any other unauthorized action affecting the registrant’s regulated activities or the hardware or software used in connection with those activities.

Importantly, the CFTC published a Fact Sheet summarizing the proposed rulemaking.

Issuing strong support for the proposals, CFTC Commissioner J. Christopher Giancarlo said, “The job of the Commodity Futures Trading Commission as a regulator is to encourage, support, inform and empower this continuous development so that market participants adopt fully optimized and up-to-date cyber defenses.” Echoing sentiments we have previously expressed, Commissioner Giancarlo went on to acknowledge that “[g]iven the constantly morphing nature of cyber risk, the best defenses provide no guarantee of protection.”

Whether or not your organization is a registered entity with the CFTC, the cybersecurity testing and system risk analysis details set forth in the proposals provide valuable insight into how your organization may take steps to protect itself from a cyber-attack. The proposals are subject to a 60-day public comment period, which will end on February 22, 2016.

FTC’s Big Data Report Has Suggestions for the Workplace

Earlier this month, the Federal Trade Commission (“FTC”) issued a report discussing “big data.” The report compiles what the agency has learned from recent seminars and research, including a public workshop held on September 15, 2014. Known best for its role as the federal government’s consumer protection watchdog, the FTC highlights in the report a number of concerns about uses of big data and the potential harms to consumers. However, while the report’s focus is on the commercial use of big data involving consumer data, it also describes a number of issues raised when big data is employed in the workplace.

Used in the human resources context, big data has many useful applications such as helping companies to better select and manage applicants and employees. The FTC’s report describes a study which shows that “people who fill out online job applications using browsers that did not come with the computer . . . but had to be deliberately installed (like Firefox or Google’s Chrome) perform better and change jobs less often.” Applying this correlation in the hiring process can result in the employer rejecting candidates not because of factors that are job-related, but because they use a particular browser. Whether this would produce the best results for the company is unclear.

Likely spurred at least in part by comments made by EEOC counsel at the FTC’s big data workshop in September 2014, the FTC’s report summarizes the potential ways that using “big data” tools can violate existing employment laws, such as Title VII of the Civil Rights Act of 1964, the Age Discrimination in Employment Act, the Americans with Disabilities Act and the Genetic Information Nondiscrimination Act. The report also includes a brief discussion of the “disparate treatment” and “disparate impact” theories, concepts familiar to many employers.

According to the report, facially neutral policies or practices that have a disproportionate adverse effect or impact on a protected class create a disparate impact, unless those practices or policies further a legitimate business need that cannot reasonably be achieved by means that are less disparate in their impact. Consider the application above: use of a particular browser seems facially neutral, but some might argue that selection results based on that correlation can have a disparate impact on certain protected classes. Of course, as the FTC report notes with regard to other uses of big data, a fact-specific analysis will be necessary to determine whether a practice causes a disparate impact that violates the law.
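
To make the concept concrete, here is a simplified, entirely hypothetical calculation comparing selection rates for two groups of applicants screened on browser choice. The numbers are made up, and the four-fifths rule of thumb used below comes from the EEOC’s Uniform Guidelines, not from the FTC report; it is a screening heuristic, not a legal conclusion.

```python
# Hypothetical applicant pools screened by a facially neutral criterion (browser choice).
group_a_applicants, group_a_selected = 200, 120   # made-up numbers
group_b_applicants, group_b_selected = 200, 80    # made-up numbers

rate_a = group_a_selected / group_a_applicants    # 0.60
rate_b = group_b_selected / group_b_applicants    # 0.40

# Four-fifths (80%) rule of thumb: a selection rate below 80% of the highest
# group's rate is often treated as preliminary evidence of adverse impact.
impact_ratio = rate_b / rate_a                    # roughly 0.67
print(f"Selection rates: A={rate_a:.0%}, B={rate_b:.0%}, impact ratio={impact_ratio:.2f}")
print("Potential adverse impact" if impact_ratio < 0.8 else "No adverse impact indicated")
```

As the report itself emphasizes, arithmetic like this is only a starting point; a fact-specific legal analysis is still required.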

Two other concerns discussed in the FTC’s report that have workplace implications include:

  • Biases in the underlying data. Big data is about the collection, compilation and analysis of massive amounts of data. If hidden biases exist in these stages of the process, “then some statistical relationships revealed by that data could perpetuate those biases.” Yes, this means “garbage in, garbage out.” The report provides a helpful example: a company’s big data algorithm only considers applicants from “top tier” colleges to help them make hiring decisions. That company may be incorporating previous biases in college admission decisions. Thus, it is critical to understand existing biases in data as they could undermine the usefulness of the end results.
  • Unexpectedly learning sensitive information. Employers using big data can inadvertently come into possession of sensitive personal information. The report describes a study which combined data on Facebook “Likes” with limited survey information to show that researchers could accurately predict a male user’s sexual orientation 88 percent of the time, a user’s ethnic origin 95 percent of the time, and whether a user was Christian or Muslim 82 percent of the time. Access to this information could expose an employer to claims that its hiring decisions were based on this information, rather than on other legitimate factors.

Companies can maximize the benefits and minimize the risks of big data, according to the FTC report, by asking the following questions:

  • How representative is your data set?
  • Does your data model account for biases?
  • How accurate are your predictions based on big data?
  • Does your reliance on big data raise ethical or fairness concerns?

There certainly is much to consider before using big data technology in the workplace, or for commercial purposes. As big data applications become more widespread and cost efficient, employers may feel the need to use it to remain competitive. They will need to proceed cautiously, however, and understand the technology, the data collected and whether the correlations work and work ethically.

