As wearable and analytics technology continues to explode, professional sports leagues, such as the NFL, have aggressively pushed into this field. (See Bloomberg). NFL teams insert tiny chips into players' shoulder pads to track different metrics of their game. During the 2018-2019 NFL season, data was released showing that Ezekiel Elliott reached 21.27 miles per hour on a 44-yard run, his fastest of the season. The Dallas Cowboys are not alone, as all 32 teams throughout the league can access this chip data, which is collected via RFID tracking devices. Sports statistics geeks don't stand a chance, as this technology will track completion rates, double-team percentages, catches over expectation, and a myriad of other data points.

There are obvious questions and concerns about the use of this technology, and not just at the professional level. Wearables can be found at all levels of sports and athletic activities, including at colleges and high schools. At the professional level, the NFL is unique in that it allows teams to use the chip data during contract negotiations. However, players do not have full access to this information, unless specifically granted by individual teams. This is important since there is much debate over who truly owns this data. And, for a variety of reasons, players and athletes want to know where their information is stored, how it is stored, whether and how it might be used and disclosed, who has access to it, and what safeguards are in place to protect it. Major League Baseball and the Players Association added Attachment 56 to the 2017-2021 Collective Bargaining Agreement to address some of these concerns. But, again, these and other questions are not unique to professional ball players.

With devices ranging from wearable monitors to clothing and equipment with embedded sensors, professional teams, colleges and universities, local school districts, and other sports and athletic institutions, as well as the companies that provide the wearables, can now collect massive amounts of data such as an athlete’s heart rate, glucose level, breathing, gait, strain, or fatigue. On the surface, this data may relate to an athlete’s performance and overall wellness, which may be somewhat apparent to onlookers without the aid of the device. However, alone or aggregated, the data may reveal more sensitive personal information relating to the athlete’s identity, location, or health status, information that cannot be obtained just by closely observing the individual. When organizations collect, use, share, or store this data, it creates certain privacy and security risks and numerous international, federal, and state data protection laws may apply. Any sports or athletic organization that develops a wearable device program, or has reason to believe that these devices are being used by coaches and others to collect similar data, should be mindful of these risks and regulatory issues.

Below is a non-exhaustive list of some of these laws:

A few weeks back, a company’s watch list containing nearly 2.5 million individuals and entities it had flagged as “high-risk” for its clients was mistakenly leaked to the public. A “high-risk” entity in this circumstance was one potentially linked to organized crime or terrorism. The leak resulted from an unsecured and incorrectly configured company database.

Typically, the data breaches we hear about in the news involve a leak of personal information such as Social Security numbers, medical information, or credit card numbers. Moreover, state data breach notification and reasonable safeguard laws generally create an affirmative obligation to protect against and respond to a data breach involving personal information. For example, under California’s data security law, a business that owns, licenses, or maintains personal information must implement and maintain reasonable security procedures and practices appropriate to the nature of the information. Similarly, under New Jersey’s data breach notification law, any business that conducts business in New Jersey, or any public entity, that compiles or maintains computerized records that include personal information must disclose any breach of security of those computerized records, following discovery or notification of the breach, to any customer who is a resident of New Jersey whose personal information was, or is reasonably believed to have been, accessed by an unauthorized person. The definition of personal information under state data breach notification and reasonable safeguard laws commonly includes the following types of data: (i) Social Security number; (ii) driver’s license number or state-issued ID card number; or (iii) account number, credit card number, or debit card number combined with any security code, access code, PIN, or password needed to access an account. Some states have broader definitions of personal information that can include other types of data, such as biometric data, passport numbers, or medical information. Note that all of this is unlike the information involved in the “watch list” incident mentioned above.

Despite the media and legislative focus on data breaches of personal information, there are other types of sensitive data that, when breached, can have a detrimental impact on an organization. An organization can face a data breach involving leaked confidential business information, trade secrets, organizational strategies, or financial information, to name a few. As a result, it is important for an organization to have safeguards in place to protect any data it deems of value, whether personal information or otherwise, even if there is no affirmative obligation under the law to do so. Strong IT safeguards are part of the solution, but not a silver bullet. Administrative and physical safeguards also are needed, such as access management policies, awareness training, equipment inventory, and vendor assessment and management programs. No organization is immune to a data breach, and preparedness can make all the difference, both in preventing a breach and in responding if one does occur.

Below are a few of our helpful resources for preventing and responding to a data breach:

 

Yesterday, the U.S. Supreme Court denied Zappos’s petition for a writ of certiorari asking the Court to review a Ninth Circuit decision that allowed customers affected by a data breach to proceed with a lawsuit on grounds of vulnerability to fraud and identity theft. The ruling stems from a 2012 breach that affected over 24 million Zappos customers and included hackers accessing customers’ names, account numbers, passwords, email addresses, billing and shipping addresses, phone numbers, and the last four digits of their credit card numbers.

In March 2018, the Ninth Circuit reversed a decision by the United States District Court for the District of Nevada that had tossed claims brought by customers who alleged the breach left them at “imminent” risk of harm, on the ground that they had not alleged already suffering financial losses. A three-judge Ninth Circuit panel held that the sensitivity of the information stolen in the breach — including credit card numbers and other means to commit fraud or theft — led it to conclude the customers had adequately alleged an injury. “Plaintiffs allege that the type of information accessed in the Zappos breach can be used to commit identity theft, including by placing them at higher risk of ‘phishing’ and ‘pharming,’ which are ways for hackers to exploit information they already have to get even more PII,” the panel wrote.

Businesses facing class action litigation following a data breach have long waited for the Supreme Court to weigh in on whether a demonstration of actual harm is required to have standing to sue. Federal circuit courts have struggled with this issue over the past few years, in large part due to lack of clarity following the U.S. Supreme Court’s decision in Spokeo, Inc. v. Robins, which held that even if a statute has been violated, plaintiffs must demonstrate an “injury-in-fact” that is both concrete and particularized, but which failed to clarify whether a “risk of future harm” qualifies as such an injury. For example, the 3rd, 6th, 7th, 9th, and D.C. Circuits have generally found standing, while the 1st, 2nd, 4th, and 8th Circuits have generally found no standing, where a plaintiff alleges only a heightened “risk of future harm.”

In its appeal to the Supreme Court, Zappos argued that “the factual scenario this case presents – a database holding customers’ personal information is accessed, but virtually no identity theft or fraud results – is an increasingly common one”. The Supreme Court’s rejection of the Zappos petition is considered a setback for companies facing similar litigation. Moreover, the California Consumer Privacy Act, set to take effect in 2020, authorizes a private cause of action against a covered business for damages resulting from a failure to implement appropriate security safeguards that results in a data breach, and the Illinois Supreme Court recently held that actual harm is not required to sue under the Illinois Biometric Information Privacy Act (“BIPA”). The Supreme Court did not provide a reason for denying the Zappos petition; nonetheless, its decision, coupled with these state initiatives, is likely to have a significant impact on data breach class action lawsuits going forward.

Add Washington D.C. Attorney General Karl A. Racine’s recent data security legislative proposal – the Security Breach Protection Amendment Act of 2019 – to the growing list of states and jurisdictions across the country seeking to strengthen privacy and security protections around personal information.

Proposed in response to major data breaches, a frequent catalyst for stronger data privacy and security legislation, AG Racine’s bill would expand legal protections for personal information to help prevent data breaches and enhance responses to them. Specifically, the bill would:

  1. like legislation being considered in New Jersey, expand the definition of personal information that, if breached, would require notification. If passed, however, the definition of personal information in D.C. would be much broader than in New Jersey and many other states, and would include passport numbers, taxpayer identification numbers, military ID numbers, health information, biometric data, genetic information and DNA profiles, and health insurance information;
  2. require businesses that experience a data breach to include specific information in the notifications to affected persons, such as (i) the categories of information that were, or are believed to have been, involved in the breach, (ii) contact information for the person making the notification, as well as the credit reporting agencies, the FTC, and the D.C. Attorney General, and (iii) the right under federal law to obtain a security freeze at no cost and how to obtain such a freeze; and
  3. mandate that businesses offer two years of free identity theft protection when a breach involves Social Security numbers. Washington D.C. would join states such as Connecticut, Delaware, and, beginning in April, Massachusetts in requiring that such services be provided following certain breaches.

The bill also would mandate that businesses that handle personal information implement reasonable safeguards to protect that data. Additionally, businesses that obtain services from a nonaffiliated third party and disclose personal information of a DC resident under an agreement with that third party must require the third party by agreement to safeguard that information. Again, these changes put D.C. in the company of other states such as California, Colorado, and Massachusetts.

The legislative screws continue to tighten around data privacy and security.

The California Consumer Privacy Act (CCPA), which goes into effect January 1, 2020, is considered the most expansive state privacy law in the United States. Organizations familiar with the European Union’s General Data Protection Regulation (GDPR), which became effective on May 25, 2018, will certainly understand the CCPA’s implications. Perhaps the best-known comprehensive privacy and security regime globally, the GDPR solidified and expanded a prior directive and granted individuals certain rights with respect to their personal data. The CCPA seems to have spurred a flood of similar legislative proposals at the state level.

Since the start of 2019, at least six state legislatures have introduced privacy bills modeled largely on the CCPA. Below are some highlights of each state legislative proposal:

  • Hawaii – SB 418, introduced on January 24 by two Democratic senators, contains consumer rights and business requirements similar to those in the CCPA. The current bill text does not include a definition of “business”. Although this will likely be remedied, if left as is, the Hawaii bill would have a broader reach than the CCPA, which applies only to entities that do business in the state of California.
  • Maryland – SB0613, introduced on February 4 by Senator Susan Lee (D), includes consumer rights similar to those in the CCPA, but its right of deletion (popularly known as the “right to be forgotten”) is more extensive, as it limits the circumstances under which an organization can deny such a request. Also notable, the bill prohibits both discrimination against a consumer for exercising his or her rights and financial incentives for the processing of personal information.
  • Massachusetts – SD.341, presented by Senator Cynthia Creem in early February, combines key aspects of the CCPA with aspects of Illinois’s Biometric Information Privacy Act (BIPA). The bill would give Massachusetts consumers a private right of action if their personal information or biometric information (addressed separately in the bill) is improperly collected. Moreover, similar to the Illinois Supreme Court’s recent holding regarding BIPA, under the proposed bill Massachusetts consumers may not have to demonstrate actual harm to seek damages.
  • Mississippi – HB 2153, a House bill that was quickly killed, was the closest in structure to the CCPA, pulling language directly from the California law. Although the Mississippi bill did not succeed, it signals how state legislators across the U.S. are thinking about consumer privacy.
  • New Mexico – SB176, introduced on January 19 by Senator Michael Padilla (D), attempts to protect consumer privacy without stifling the “innovation and creativity” of companies. Although the language differs, key components of the CCPA are present in the New Mexico bill (e.g., right of access, right of deletion, right to opt out, private right of action).

In addition to the CCPA-like proposals discussed above, other states are considering their own ways to enhance consumer data privacy for their residents. For example, New York legislators recently introduced at least four different consumer privacy bills, including one on biometric privacy (SB 547) and another that would regulate businesses’ collection and disclosure of personal information (S00224). And in mid-January, several North Dakota legislators introduced a consumer privacy bill, HB 1485, focused exclusively on prohibiting the disclosure of an individual’s personal information without “express written consent”.

Finally, in January a group of Washington State senators introduced the “Washington Privacy Act,” SB 5376 (WPA). That bill would impose more GDPR-like requirements on businesses that collect personal information relating to Washington residents. In addition to notice requirements and consumer rights such as access, deletion, and rectification, the WPA would impose restrictions on the use of automated profiling and facial recognition.

This state-level activity could prompt Congress to move more quickly on one of its proposed bills, the latest being the Data Care Act, which would hold large tech companies, specifically “online service providers”, responsible for the protection of personal information. Much of the private sector, including the Internet Association, made up of leading tech companies, is pushing for a federal approach to consumer privacy to prevent the “patchwork of state laws” that has arisen in the area of data breach notification. Not even three months in, 2019 is already gearing up to be a busy year for consumer privacy law.

 

According to reports, bank customers in Australia (yes, data breach notification requirements exist down under) have been affected by an “industry-wide” data breach experienced by a third-party service provider to the banks, property valuation firm LandMark White. As expected, the banks are investigating and, in some cases, notifying customers about the incident. However, there are reports that some of the affected banks are suspending the vendor from the group of valuation firms they use. This is not an unusual reaction by organizations whose third-party service providers have caused, or are believed to have caused, a data breach affecting the organization’s customers, patients, students, employees, etc. But it is worth considering whether that is the best course of action.

In the United States, a growing number of states require businesses to contractually bind their third-party service providers to maintain reasonable safeguards to protect personal information made available to them to perform services. For example, under the Illinois Personal Information Protection Act:

A contract for the disclosure of personal information concerning an Illinois resident that is maintained by a data collector must include a provision requiring the person to whom the information is disclosed to implement and maintain reasonable security measures to protect those records from unauthorized access, acquisition, destruction, use, modification, or disclosure.

Personal information under this law includes information such as a name coupled with a Social Security number, driver’s license number, medical information, or unique biometric data used to authenticate an individual, such as a fingerprint, retina or iris image, or other unique physical or digital representation of biometric data. In connection with obtaining written assurances from a third-party vendor, many companies engage their vendors in an assessment process to get a better sense of the security of the vendor’s environment. Assessments can take many forms, including interviewing the vendor’s chief information security officer, reviewing policies and procedures, subjecting the vendor to a detailed security questionnaire, penetration testing, and more. When organizations think of best practices for data security, assessment procedures of some kind certainly should be on the list.

But after the assessments and contract negotiations are completed, data breaches can still happen. In many cases, when a third party vendor experiences a breach affecting personal information, the owners of that information are the vendor’s customers. Uncomfortable as it may be, breach notification laws generally require the vendor maintaining the breached personal information to notify the owner, the vendor’s customer(s). At that point, the parties typically work through the incident response process, which in many cases could be driven by contract, although many agreements are silent on this issue.

In any event, organizations will almost invariably begin to think about whether this is a vendor they want on their team going forward. Of course, there are a number of reasons that might support terminating the relationship, such as:

  • The vendor may not have been protecting the information the way it should have under the contract and applicable law, resulting in the breach.
  • The vendor has not been transparent, responsive, or cooperative with the organization during the incident response process.
  • The vendor has not taken sufficient steps to ensure a similar breach will not happen again.
  • The organization is getting pressure from its customers who are serviced or supported, in part, by the vendor.
  • The organization has been unhappy with the vendor for some time (unrelated to the breach) and this is the last straw.

However, there also are reasons for maintaining the relationship, which include:

  • “The grass is always greener on the other side” – it may not be. There is no guarantee that a new vendor will have greater data security, be able to avoid a sophisticated attack, or be willing to work with the owner of the data as transparently as the current vendor.
  • The current vendor arguably is “battle-tested” with data security and incident response more top of mind.
  • There is a long-standing, trusted relationship with the vendor whose products and/or services are too important to the organization.
  • Both the organization and the vendor may be more inclined following a breach to collaborate on enhanced security measures and incident response planning.

The author takes no position here on whether to stay or go, as such a decision requires consideration of a number of factors. Third party service providers play important roles for many organizations, and their selection and continued utilization are decisions that should be made following an appropriate level of due diligence and analysis.

 

Yesterday, California Attorney General Xavier Becerra and Assemblymember Marc Levine (D-San Rafael) announced Assembly Bill 1130, which is intended to strengthen California’s existing data breach notification law. In short, AB 1130 would amend the existing law to include passport numbers and biometric information (e.g., fingerprint and retina scan data) in the definition of personal information, so that a breach involving such data would require notification to consumers under the law.

Currently, similar to most other states’ breach notification laws, California’s data breach notification law defines personal information to include a covered person’s first name (or first initial) and last name coupled with sensitive information such as a Social Security number, driver’s license number, financial account number, or health information. The changes under AB 1130 would keep California out in front of other states, although a number of states, such as Illinois, already include data such as biometric information as personal information under their breach notification laws. As many have observed, these state-by-state changes only add to the complexity businesses face when they experience a data breach affecting individuals in multiple states.

News reports concerning the announcement of AB 1130 note that Attorney General Xavier Becerra “has promised to crack down on companies that try to hide data breaches from the public.” And soon, individuals in California affected by a data breach likely will have expanded rights to sue under the California Consumer Privacy Act (CCPA). As we reported earlier, the CCPA authorizes a private cause of action against a covered business for damages resulting from a failure to implement appropriate security safeguards that results in a data breach. The CCPA incorporates much of the definition of personal information under the California breach notification law. What should be troubling for covered businesses is that a successful plaintiff can recover damages in an amount not less than $100 and not greater than $750 per consumer per incident, or actual damages, whichever is greater, as well as injunctive or declaratory relief and any other relief the court deems proper. Thus, in addition to the notification costs a covered business may incur under the state’s breach notification law, which could include providing ID theft and credit monitoring services, class action lawsuits brought under this provision of the CCPA could be very costly. The expansion of the definition of personal information to include passport and biometric data only increases these risks.

The U.S. Supreme Court may finally weigh in on the hottest issue in data breach litigation: whether a demonstration of actual harm is required to have standing to sue. Standing to sue in a data breach class action largely turns on whether plaintiffs establish that they have suffered an “injury-in-fact” resulting from the data breach. Plaintiffs in data breach class actions often cannot demonstrate that they have suffered financial or other actual damages resulting from a breach of their personal information. Instead, they allege that a heightened “risk of future harm,” such as identity theft or fraudulent charges, is enough to establish an “injury-in-fact”.

Federal circuit courts have struggled over the past few years with the question whether plaintiffs in a data breach class action can establish standing if they allege only a heightened “risk of future harm”. For example, the 3rd, 6th, 7th, 9th, and D.C. Circuits have generally found standing, while the 1st, 2nd, 4th, 5th, and 8th Circuits have generally found no standing where a plaintiff alleges only a heightened “risk of future harm”. This circuit split is due in large part to lack of clarity following the U.S. Supreme Court’s decision in Spokeo, Inc. v. Robins, which held that even if a statute has been violated, plaintiffs must demonstrate an “injury-in-fact” that is both concrete and particularized, but which failed to clarify whether a “risk of future harm” qualifies as such an injury.

The U.S. Supreme Court may finally weigh in on standing in data breach litigation this term, in Frank v. Gaos. The Court recently requested supplemental briefs addressing whether any of the named plaintiffs has standing such that federal courts have Article III jurisdiction over the dispute. The Court’s request is particularly notable because the issue before the Court was not initially focused on standing. Although Frank is not a classic data breach case, but rather a privacy class action settlement based on unauthorized sharing of website search terms with third parties, it may still give the Court an opportunity to resolve the circuit split and provide further guidance on standing in data breach litigation.

Similarly, the Illinois Supreme Court recently held that actual harm is not required to sue under the Illinois Biometric Information Privacy Act (“BIPA”), a ruling likely to increase the already large number of suits, including putative class actions, filed under that law. It goes without saying that the U.S. Supreme Court’s decision in Frank v. Gaos could have a significant impact on data breach class action lawsuits.

 

A new bill in the Senate proposes to hold large tech companies, specifically “online service providers”, responsible for the protection of personal information in the same way banks, lawyers, and hospitals are held responsible. The Data Care Act of 2018, introduced on December 12, 2018, is designed to protect users’ information online and penalize companies that do not properly safeguard such data.

Personal data under the bill includes:

  • Social Security number;
  • Driver’s license number;
  • Passport or military identification number;
  • Financial account number, or credit or debit card number combined with the access code or password necessary to permit access to the financial account;
  • Unique biometric data, including a fingerprint, voice print, retina image, or other unique physical representation;
  • Account information, such as a user name and password or an email address and password; and
  • First and last name, or first initial and last name, in combination with date of birth.

The bill would also prohibit personal information from being sold or disclosed unless the end user agrees.

The bill is seen as part of a broader push to enact federal privacy legislation, in part to prevent more states from enacting their own privacy legislation, similar to recent moves in California and Illinois.

The bill was introduced by Senator Brian Schatz (D-HI), the Ranking Member of the Communications, Technology, Innovation, and the Internet Subcommittee. The bill was co-sponsored by 14 Senate Democrats.

Senator Schatz stated in a press release that people “have a basic expectation that the personal information they provide to websites and apps is well-protected and won’t be used against them. Just as doctors and lawyers are expected to protect and responsibly use the personal data they hold, online companies should be required to do the same.”

The bill would be enforced by the Federal Trade Commission. It would establish three basic duties: a duty of care, a duty of loyalty, and a duty of confidentiality. If passed, the FTC would go through the normal notice-and-comment rulemaking process to further establish how authorities will define, implement, and enforce concepts like “reasonable” security measures.

There has been no shortage of federal initiatives seeking heightened protection for consumer personal data in the past couple of years, particularly since enactment of the EU’s GDPR, and it may be only a matter of time before one of them finally sticks. We will continue to report on the Data Care Act of 2018 and other similar initiatives as developments unfold.