For years now, state laws have required subject organizations to provide notification to affected data subjects and, in some instances, to state agencies, consumer reporting agencies, and the media, when they experience a “breach” of certain categories of information.  And a growing number of states – including California, Colorado, Connecticut, Maryland, Massachusetts, Texas, and, most recently, New York – have gone a step further, requiring subject organizations to develop and implement “reasonable safeguards” to secure the personal information they collect and use.  With the passage of the California Consumer Privacy Act (“CCPA”), California is poised to establish the next frontier in U.S. privacy and data security law.

The CCPA, which is set to take effect on January 1, 2020, imposes on subject organizations not only the obligation to secure data, and to provide notification in the event of a breach, but also an obligation to develop programs to manage the sweeping suite of rights that the CCPA grants to consumers – a category that, as we’ve previously discussed, will likely include employees, at least in certain circumstances.

The CCPA, which follows in the footsteps of the European Union’s GDPR, has already inspired the proposal of similar legislation in other states – such as Hawaii, Maryland, Massachusetts, Mississippi, New Mexico, and Rhode Island – as well as at the federal level.

Access & Portability

One significant right the CCPA grants consumers is the right to request information regarding:

  • the categories of personal information businesses collect about them, including:
    • identifiers (e.g., real name, address, Social Security number);
    • characteristics of protected classifications under California or federal law;
    • commercial information (e.g., products purchased, records of personal property);
    • biometric information;
    • internet or other electronic network activity (e.g., browsing history, search history);
    • geolocation data;
    • audio, visual, and similar information; and
    • profession- or employment-related information;
  • the sources from which that personal information was collected (e.g., online order histories, online surveys, tracking pixels, cookies, web beacons);
  • the categories of personal information sold to third parties;
  • the categories of personal information disclosed for business purposes;
  • the categories of third parties to whom personal information was sold or disclosed (e.g., tailored advertising partners, affiliates, social media websites, service providers);
  • the business or commercial purposes for which personal information was collected or sold (e.g., fraud prevention, marketing, improving customer experience); and
  • the “specific pieces” of personal information collected.

The CCPA imposes a one-year lookback period from the time of the request, and mandates that, in the event consumers request access to their personal information, the subject business provide responsive materials “in a readily usable format that allows consumers to transmit [the] information from one entity to another without hindrance.”

Deletion

Subject to certain exceptions (e.g., to complete the transaction for which the personal information was collected; to protect against malicious, deceptive, fraudulent, or illegal activity; or to identify and repair errors that impair existing and intended functionality), the CCPA permits consumers to request that subject businesses delete – and direct service providers to delete – personal information collected about them.

Opt Out

Under the CCPA, consumers are empowered to opt out of the “sale” of their personal information.  To facilitate consumers’ exercise of this right, subject businesses are required to provide a link titled “Do Not Sell My Personal Information” to a web page where consumers can opt out of having their personal information sold to third parties. Similarly, Nevada recently enacted a new online privacy law requiring businesses to offer consumers the right to opt out of the “sale” of their personal information, effective October 1, 2019.

Non-Discrimination

To protect consumers who exercise their rights under the CCPA, the law generally prohibits subject businesses from charging different prices or rates to consumers, providing different services to them, or denying them goods or services, because they exercised their CCPA rights.  That said, businesses are permitted to charge different prices or rates, or to provide different levels or qualities of goods or services, if those differences “reasonably relate” to the value provided to the consumer by the consumer’s data. Additionally, businesses may, under certain circumstances, offer financial incentives to consumers to entice them to permit the collection, retention, and/or sale of their information.

Privacy Policy

The CCPA requires subject businesses to disclose, and facilitate the exercise of, the above-discussed rights in their privacy policies.  Specifically, businesses should update their existing policies, or develop new policies, to include the following elements:

  • a description of the new rights afforded consumers under the CCPA;
  • a list of the categories of personal information collected by the business in the preceding 12 months;
  • a list of the categories of personal information sold or disclosed for a business purpose in the preceding 12 months;
  • a link to a “Do Not Sell My Personal Information” web-based opt-out tool;
  • a description of any financial incentives for providing data or not exercising rights (e.g., if the company offers a discount to consumers who provide their email addresses for marketing purposes, this incentive should be disclosed in the privacy policy); and
  • two or more designated methods for submitting information requests, including a toll-free number and a website address (if applicable).

Private Right Of Action

In contrast to many U.S. privacy and data security laws, the CCPA provides consumers a private right of action – albeit a limited one.  Specifically, the law empowers consumers to sue on their own behalf when a subject business’s failure to maintain “reasonable safeguards” results in the breach of their personal information.  Notably, the definition of personal information applicable to the private right of action is narrower than the definition used throughout the rest of the CCPA. A consumer can bring a private right of action under the CCPA only if the following information is breached: an individual’s name along with his or her Social Security, driver’s license, or California identification card number; account, credit card, or debit card number, in combination with a code or password that would permit access to a financial account; or medical or health insurance information. While this private right of action does not extend to the rights discussed above – which will be subject to agency enforcement – even this limited private right will, if the recent flood of claims brought under the Illinois Biometric Information Privacy Act is any indication, result in a significant volume of class action litigation.

Takeaways

With the January 1, 2020 deadline less than four months away, subject businesses need to promptly evaluate whether they are prepared to effectively navigate the expansive array of rights the CCPA extends to consumers.  To do so, businesses will need to, among other things: (1) map the personal information about California residents that they collect, use, and sell; (2) design and document policies, procedures, and practices to manage disclosure, access, and deletion requests, and to avoid discriminatory conduct; and (3) train their workforce members to effectively comply with those policies, procedures, and practices.

One final point of note: the CCPA has been a work in progress over the last year. California’s legislative session ended on September 13th with final modifications to several bills that would amend aspects of the CCPA. Those bills were unanimously approved in final form and now move to California Governor Gavin Newsom, who must take final action on them by mid-October.  We will continue to track these developments.

Most businesses in the insurance industry have one thing in common – they collect and maintain significant amounts of sensitive, nonpublic information including personal information. Not surprisingly, insurance-related businesses are a target of cyberattacks and a few have faced some of the largest data breaches reported to date. Beyond the headlines, however, small and mid-sized insurance companies face similar risks, and governments have stepped up their scrutiny of cybersecurity. Hearing the calls for legislation and regulation, the National Association of Insurance Commissioners (NAIC) adopted a Data Security Model Law with the goal of having it adopted in all states within a few years. So far, eight states (see below) have adopted a version of the Model Law and it looks like more are on the way.

What is the NAIC’s Data Security Model Law?

In an effort that largely began with establishing a task force in 2014, the NAIC adopted a Data Security Model Law in November 2017. The Model Law is intended to provide a benchmark for any cybersecurity program. The requirements in the Model Law track some familiar data security frameworks, such as the HIPAA Security Rule. It also has many similarities to the New York State Department of Financial Services (NYDFS) cybersecurity regulations (specifically, 23 NYCRR Part 500). Note that licensees are not subject to the Model Law unless the state in which they are licensed adopts a version of it; at that point, the licensee must comply with that state’s law.

Who is Subject to the Model Law?

The Model Law generally applies to “Licensees,” defined as:

any person licensed, authorized to operate, or registered, or required to be licensed, authorized, or registered pursuant to the insurance laws of this State but shall not include a purchasing group or a risk retention group chartered and licensed in a state other than this State or a Licensee that is acting as an assuming insurer that is domiciled in another state or jurisdiction.

Licensees range from large insurance carriers to small independent adjusters. These include individuals providing insurance-related services, firms such as agency and brokerage businesses, and insurance companies. Additionally, there may be businesses that require a license but are not traditionally considered to be in the insurance business. Examples include car rental companies and travel agencies that offer insurance packages in connection with their primary business.

The Model Law provides exceptions for certain licensees. For example, licensees with fewer than ten employees (including independent contractors) are exempt from the requirement to maintain an information security program. However, they remain subject to the other provisions of the Model Law, such as the requirement to provide notification in the case of certain cybersecurity events.

What are some of the requirements of the Model Law?

On Thursday, New York Governor Andrew Cuomo signed into law the Stop Hacks and Improve Electronic Data Security Act (SHIELD Act), sponsored by Senator Kevin Thomas and Assemblymember Michael DenDekker. The SHIELD Act, which amends the State’s current data breach notification law, imposes more expansive data security and data breach notification requirements on companies in the hope of ensuring better protection for New York residents from data breaches of their private information. The SHIELD Act takes effect on March 21, 2020. Governor Cuomo also signed into law the Identity Theft Prevention and Mitigating Services Act, which requires credit reporting agencies that face a breach involving Social Security numbers to provide five years of identity theft prevention and mitigation services to affected consumers. It also gives consumers the right to freeze their credit at no cost. That law becomes effective in 60 days.

Below are several FAQs highlighting key features of the SHIELD Act:

What is Private Information under the SHIELD Act?

Unlike other state data breach notification laws, New York’s original data breach notification law included definitions for “personal information” and “private information.” The current definition of “personal information” remains: “any information concerning a natural person which, because of name, number, personal mark, or other identifier, can be used to identify such natural person.” However, the SHIELD Act expands the definition of “private information” which sets forth the data elements that, if breached, could trigger a notification requirement. Under the amended law, “private information” means either:

  • personal information consisting of any information in combination with any one or more of the following data elements, when either the data element or the combination of personal information plus the data element is not encrypted, or is encrypted with an encryption key that has also been accessed or acquired:
    • social security number;
    • driver’s license number or non-driver identification card number;
    • account number, credit or debit card number, in combination with any required security code, access code, password or other information that would permit access to an individual’s financial account; account number, credit or debit card number, if circumstances exist wherein such number could be used to access an individual’s financial account without additional identifying information, security code, access code, or password; or
    • biometric information, meaning data generated by electronic measurements of an individual’s unique physical characteristics, such as a fingerprint, voice print, retina or iris image, or other unique physical representation or digital representation of biometric data which are used to authenticate or ascertain the individual’s identity; OR
  • a user name or e-mail address in combination with a password or security question and answer that would permit access to an online account.

It is worth mentioning that the SHIELD Act’s expansive definition of “private information” is still not as broad as the definition of the analogous term under the laws of other states. For example, California, Illinois, Oregon, and Rhode Island have expanded the applicable definitions in their laws to include not only medical information, but also certain health insurance identifiers.

How has the term “breach of security of the system” changed?

The SHIELD Act alters the definition of “breach of the security of the system” in two significant ways. First, it broadens the circumstances that qualify as a “breach” by including within the definition of that term incidents that involve “access” to private information, regardless of whether they resulted in “acquisition” of that information. Under the old law, access absent acquisition did not qualify as a breach. In connection with this change, the amendments also add several factors for determining whether there has been unauthorized access to private information, including “indications that the information was viewed, communicated with, used, or altered by a person without valid authorization or by an unauthorized person.”

Second, as discussed above, the expansion of the definition of private information effectively expands the situations which could result in a breach of the security of the system.  Notably, the SHIELD Act retains the “good faith employee” exception to the definition of “breach.”

Are there any substantial changes to data breach notification requirements? And who must comply?

Any person or business that owns or licenses computerized data which includes private information of New York residents must comply with breach notification requirements, regardless of whether the person or business conducts business in New York.

That said, there are several circumstances which would exempt a business from the breach notification requirements. For example, notice is not required if “exposure of private information” was an “inadvertent disclosure and the individual or business reasonably determines such exposure will not likely result in misuse of such information, or financial harm to the affected persons or emotional harm in the case of unknown disclosure of online credentials.” Further, businesses that are already regulated by, and comply with, data breach notice requirements under certain applicable state or federal cybersecurity laws (e.g., HIPAA, NY DFS Reg. 500, the Gramm-Leach-Bliley Act) are not required to further notify affected New York residents; however, they are still required to notify the New York State Attorney General, the New York State Department of State Division of Consumer Protection, and the New York State Division of the State Police.

What are the “reasonable” data security requirements? And who must comply with them?

As with the notification requirements, the SHIELD Act requires that any person or business that owns or licenses computerized data which includes private information of a New York resident develop, implement, and maintain reasonable safeguards to protect the security, confidentiality, and integrity of that private information. Again, businesses in compliance with laws like HIPAA and the GLBA are considered in compliance with this section of the law. Small businesses are subject to the reasonable safeguards requirement; however, their safeguards may be “appropriate for the size and complexity of the small business, the nature and scope of the small business’s activities, and the sensitivity of the personal information the small business collects from or about consumers.” A small business is any business with fewer than fifty employees, less than $3 million in gross annual revenue in each of the last three years, or less than $5 million in year-end total assets.
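For organizations building compliance checklists, the small-business thresholds above reduce to a simple disjunctive test. Below is a minimal, purely illustrative sketch (the function name and inputs are our own, not from the statute; consult the law itself for the authoritative criteria):

```python
def is_shield_small_business(employees, gross_annual_revenues, year_end_total_assets):
    """Illustrative test of the SHIELD Act's small-business definition.

    A business qualifies if ANY one of these holds:
      - fewer than 50 employees (including independent contractors);
      - less than $3M in gross annual revenue in each of the last 3 years;
      - less than $5M in year-end total assets.
    """
    under_revenue_cap = (
        len(gross_annual_revenues) == 3
        and all(r < 3_000_000 for r in gross_annual_revenues)
    )
    return (
        employees < 50
        or under_revenue_cap
        or year_end_total_assets < 5_000_000
    )

# 75 employees, but each of the last three years' revenue is under $3M
print(is_shield_small_business(75, [2_500_000, 2_800_000, 2_900_000], 6_000_000))  # True
```

Note that qualifying does not exempt a small business from the safeguards requirement; it only scales what “reasonable” means for that business.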

The law provides examples of practices that are considered reasonable administrative, technical and physical safeguards. For example, risk assessments, employee training, selecting vendors capable of maintaining appropriate safeguards and implementing contractual obligations for those vendors, and disposal of private information within a reasonable time period, are all practices that qualify as reasonable safeguards under the law.

Are there penalties for failing to comply with the SHIELD Act?

The SHIELD Act does not authorize a private right of action, so class action litigation under the Act is not available. Instead, the Attorney General may bring an action to enjoin violations of the law and obtain civil penalties. For data breach notification violations that are not reckless or knowing, the court may award damages for actual costs or losses incurred by a person entitled to notice, including consequential financial losses. For knowing and reckless violations, the court may impose penalties of the greater of $5,000 or up to $20 per instance of failed notification, capped at $250,000. For violations of the reasonable safeguards requirement, the court may impose penalties of not more than $5,000 per violation.
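The penalty formula for knowing and reckless notification violations is straightforward arithmetic. An illustrative sketch follows (the function name is ours, and actual awards remain within the court's discretion; this computes only the statutory upper bound):

```python
def max_notification_penalty(instances):
    """Upper bound on SHIELD Act penalties for knowing/reckless
    notification violations: the greater of $5,000 or $20 per
    instance of failed notification, capped at $250,000."""
    return max(5_000, min(20 * instances, 250_000))

print(max_notification_penalty(100))     # 5000   (per-instance total falls below the $5,000 floor)
print(max_notification_penalty(10_000))  # 200000
print(max_notification_penalty(50_000))  # 250000 (cap reached)
```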

Conclusion

The SHIELD Act has far reaching effects, as any business that holds private information of a New York resident – regardless of whether that organization does business in New York – is required to comply. “The SHIELD Act will put strong safeguards in place to curb data breaches and identity theft,” said Justin Brookman, Director of Privacy and Technology Policy for Consumer Reports. The SHIELD Act signifies how seriously New York, like other states across the nation, is taking privacy and data security matters.  Organizations, regardless of their location, should be assessing and reviewing their data breach prevention and response activities, building robust data protection programs, and investing in written information security programs (WISPs).

The California Consumer Privacy Act (CCPA), which goes into effect January 1, 2020, is considered the most robust state privacy law in the United States. The CCPA seems to have spurred a flood of similar legislative proposals on the state level, and it was only a matter of time before the Empire State introduced its own version of the law. The New York Privacy Act (NYPA), s5642, introduced last month by New York Senator Kevin Thomas, the Chair of the Consumer Protection Committee, is considered a more expansive version of its California counterpart.

Similar to the CCPA, the NYPA would provide consumers with greater control over their personal data and impose substantial duties on businesses that control and process data; however, the NYPA is distinct from the CCPA in significant ways. Below are several key features of the NYPA:

  • Application: Unlike the CCPA, which applies only to businesses meeting thresholds such as $25 million in annual revenue, the NYPA applies to “legal entities that conduct business in New York” or that produce products or services that “intentionally target” New York residents. This means that small-to-medium-sized businesses, and potentially even not-for-profit organizations, will be subject to the law’s privacy and security obligations. Exemptions include state and local governments, as well as personal data regulated by HIPAA, HITECH, or the GLBA and, notably, “data sets maintained for employment records purposes.”
  • Consumer Rights: The NYPA provides consumers a broad set of rights over their personal data, including the rights of access, rectification, deletion, and data portability, and the right to stop processing. This extends the rights afforded to consumers by the CCPA, which does not include a right to rectification.
  • Privacy and Security Obligations: Under the NYPA, covered businesses would be required to “exercise the duty of care, loyalty and confidentiality . . . with respect to securing the personal data of a consumer against a privacy risk; and shall act in the best interests of the consumer, without regard to the interests of the entity, . . . in a manner expected by a reasonable consumer under the circumstances.” In addition businesses are required to “reasonably secure personal data from unauthorized access” and “promptly” notify consumers of a breach. Finally, the law prevents businesses from using personal data in a way that “(i) benefits an online service provider to the detriment of an end user; (ii) would result in reasonably foreseeable physical or financial harm to a consumer; or (iii) would be unexpected and “highly offensive” to a “reasonable consumer.”
  • Enforcement: The New York State Attorney General may bring an action in the name of the state, or on behalf of residents of the state; however, a private right of action is also available to any person injured by reason of a violation of the law. If passed, this enforcement provision would likely create an influx of litigation. A similar cause of action exists under an Illinois privacy law that you might have heard about, the Illinois Biometric Information Privacy Act or “BIPA.” That provision has resulted in a flood of litigation, including putative class actions, seeking to recover statutory damages for plaintiffs who allege their biometric information has been collected and/or disclosed in violation of the statute. This is arguably the most significant difference between the NYPA and the CCPA. Despite several attempts to expand the private right of action, in its current form the CCPA allows a private right of action only in very limited circumstances: where nonencrypted or nonredacted personal information is subject to unauthorized access, exfiltration, theft, or disclosure because the covered business failed to meet its duty to implement and maintain reasonable safeguards to protect that information.

The NYPA is still in the very early stages of the legislative process – it has only been reviewed by the Senate’s Consumer Protection Committee, and is still looking for a co-sponsor in the state Assembly. Nonetheless, such an aggressive bill signifies the seriousness with which New York is considering privacy and security matters.  Organizations, regardless of their location, should be assessing and reviewing their data collection activities, building robust data protection programs, and investing in written information security programs (WISPs).

 

The GDPR is wrapping up its first year and moving full steam ahead. This principles-based regulation has had a global impact on organizations as well as individuals. While there continue to be many questions about its application and scope, anticipated European Data Protection Board guidance and Data Protection Authority enforcement activity should provide further clarity in the upcoming year. In the meantime, here are a few frequently asked questions – some reminders of key principles under the GDPR and others addressing challenges for implementation and what lies ahead.

Can US organizations be subject to the jurisdiction of the GDPR?

Whether a US organization is subject to the GDPR is a fact-based determination. Jurisdiction may apply where the US organization has human or technical resources located in the EU and processes EU personal data in the context of activities performed by those resources. In cases where the US organization does not have human or technical resources located in the EU, it may be subject to the GDPR’s jurisdiction in two instances: if the organization targets individuals in the EU (not businesses) by offering goods or services to them, regardless of whether payment is required, or if it monitors the behavior of individuals in the EU and uses that personal data for purposes such as profiling (e.g. website cookies, wearable devices). The GDPR may also apply indirectly to a US organization through a data processing agreement.

If we execute a data processing agreement, does that make our US organization subject to the GDPR?

When an organization subject to the GDPR engages a third party to process its EU data, the GDPR requires that the organization impose contractual obligations on the third party to implement certain GDPR-based safeguards. If you are not otherwise subject to the GDPR, executing a data processing agreement will not directly subject you to the GDPR. Instead, it will contractually obligate you to follow a limited, specific set of GDPR-based provisions. Your GDPR-based obligations will be indirect in that they are contractual in nature.

Does the GDPR apply only to the data of EU citizens?

No, the GDPR applies to the processing of the personal data of data subjects who are in the EU regardless of their nationality or residence.

Is our organization subject to the GDPR if EU individuals access our website and make purchases?

If your organization does not have human or technical resources in the EU, the mere accessibility of your website to EU visitors, alone, will not subject you to the GDPR. However, if your website is designed to target EU individuals (e.g. through features such as translation to local language, currency converters, local contact information, references to EU purchasers, or other accommodations for EU individuals) your activities may be viewed as targeting individuals in the EU and subject you to the GDPR.

Are we required to delete an individual’s personal data if they request it?

If your organization is subject to the GDPR, an individual may request that you delete their personal data. However, this is not an absolute right. Your organization is not required to delete the individual’s personal data if it is necessary:

  • for compliance with a legal obligation, or for the establishment, exercise, or defense of a legal claim;
  • for reasons of public interest (e.g., public health, or scientific, statistical, or historical research purposes);
  • to exercise the right of freedom of expression or information;
  • where there is a legal obligation to keep the data; or
  • where you have anonymized the data.

Additional consideration should be given to any response when the individual’s data is also contained in your back-ups.

GDPR principles have started to influence law in the U.S. In fact, many have been watching developments regarding the California Consumer Privacy Act (CCPA), which includes a similar right to delete as it pertains to the personal information of California residents. As under the GDPR, this is not an absolute right, and in certain cases an exception may apply. For instance, both laws contain an exception to the right to deletion where the information is needed to comply with certain laws.

Does the GDPR apply to an EU citizen who works in the US?

If your organization is not subject to the GDPR and you hire an EU citizen to work in the US, the GDPR may not apply to the processing of their personal data in the US. However, depending on the circumstances, the answer may be different if the EU citizen is in the US on temporary assignment from an EU parent. In that scenario, their data may be subject to the GDPR if the US entity’s relationship with the parent creates an establishment in the EU, and it processes this data in the context of the activities of that establishment. To the extent the EU parent transfers the EU employee’s personal data from the EU to the US entity, that transfer may require EU-US Privacy Shield certification, the execution of binding corporate rules, or standard contractual clauses. These measures are designed to ensure data is protected when it is transferred to a country, such as the US, that is not deemed to provide an adequate level of data protection.

Do we need to obtain an EU individual’s consent every time we collect their personal data?

If your organization is subject to the GDPR and processes an EU individual’s information, you must have a “legal basis” to do so. Consent is just one legal basis. In addition to consent, two of the most commonly used legal bases are the “legitimate interests” of your organization and the performance of a contract with the individual. A legitimate interest is a business or operational need that is not outweighed by the individual’s rights (e.g., processing personal data for website security, conducting background checks, or coordinating travel arrangements). Processing necessary to the performance of a contract is activity that enables you to perform a contract entered into with the individual (e.g., processing employee data for payroll pursuant to the employment contract, or processing consumer data to ship goods under a purchase order).

Should we obtain an employee’s consent to process their personal data?


As we noted last month, Washington’s efforts to follow California’s lead in passing its own GDPR-like law have stalled after the bill failed to make its way through the state’s House of Representatives—despite overwhelming approval in the Senate (where it passed 46-1).  That bill’s sponsor has promised to revisit the issue during the 2020 legislative session.

Despite this roadblock on the consumer privacy front, Washington governor Jay Inslee signed a bill on May 7 (HB 1071) significantly expanding the state’s data breach notification law, RCW 19.255.01, et seq.  There was little doubt that Governor Inslee would sign the bill into law, as it passed unanimously in both state legislative bodies.

Below is a summary of major changes to the state’s data breach notification law, and key takeaways for employers subject to Washington law.  For a detailed explanation of the law’s new provisions—which will become effective March 1, 2020—please refer to this post.

Deadline to provide notice of breach shortened to thirty (30) days following discovery.

Under the current law (and until HB 1071’s amendments become effective on March 1, 2020), notice of a breach must be provided within 45 days of discovery. With the amendments, notice must be provided no more than thirty days after the organization discovers the breach. This applies to notices sent to affected consumers as well as to the state’s Attorney General. The threshold requirement for notice to the Attorney General remains the same—it is only required if 500 or more Washington residents were affected by the breach.
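Incident-response calendars can track the amended rule as a simple date offset. An illustrative sketch follows (the function name is our own, and we assume calendar days from the date of discovery):

```python
from datetime import date, timedelta

def wa_notice_deadline(discovery_date):
    """Outside deadline for Washington breach notice after HB 1071:
    no more than 30 calendar days after discovery of the breach."""
    return discovery_date + timedelta(days=30)

print(wa_notice_deadline(date(2020, 3, 1)))  # 2020-03-31
```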

Thirty days may still sound like plenty of time, but it can often take several days, or even weeks, for an entity to determine the scope of a breach and compile a list of potentially affected consumers. And if the breach affected residents of more than one state, each state’s laws must be examined to ensure that the notices sent to each individual comport with the breach notification laws of that individual’s state of residence.
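The difference between the two notice windows is easy to see with a short date calculation. The sketch below is purely illustrative; the dates, function name, and parameters are our own inventions, not anything in the statute.

```python
# Hypothetical helper illustrating the notice windows discussed above:
# 30 calendar days from discovery under the amended law, versus 45 days
# under the current law. All inputs are invented for illustration.
from datetime import date, timedelta

def notice_deadline(discovered: date, window_days: int = 30) -> date:
    """Latest date by which breach notice must be provided."""
    return discovered + timedelta(days=window_days)

discovered = date(2020, 3, 15)
print(notice_deadline(discovered))      # amended law (30 days): 2020-04-14
print(notice_deadline(discovered, 45))  # current law (45 days): 2020-04-29
```

For a multistate breach, an organization would need to run this kind of calculation separately for each state’s window and work to the earliest resulting deadline.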

Definition of “personal information” significantly expanded.

The previous definition tracked the language used by the majority of states, and only covered breaches that included an individual’s first name (or initial) and last name, plus any one or more of the three “bare minimum” data elements: Social Security number, driver’s license or state ID number, and/or financial account or card number (with an access code or password that would permit access thereto).

With the amendment, Washington adds the following six additional data elements that will be considered “personal information” if combined with an individual’s first name or initial and last name:

  • Full date of birth;
  • Unique private key used to authenticate or sign an electronic record;
  • Passport, military, or student ID number;
  • Health insurance policy or identification number;
  • Information about a consumer’s medical history, physical or mental health condition, or diagnosis or treatment by a health care professional; and
  • Biometric data (such as fingerprint or retina scans, voiceprints, or other unique biological patterns used to identify an individual).

Significantly, Washington law now considers an individual’s username (or email address) and password (or security questions sufficient to permit access to an account) to be “personal information” regardless of whether the individual’s name is included. Notice to affected consumers of a breach of this type may be provided electronically or by email (unless the affected account was the individual’s email account).

In addition, the new law provides that even without an individual’s first name or initial and last name, any one or more of the other data elements will be considered “personal information” if the element, or combination of elements, would permit a person to commit identity theft against the individual, and the data element(s) were not rendered unusable through encryption, redaction, or other methods.

Finally, as discussed more thoroughly in this post, HB 1071 also added notice requirements for affected consumers and the Attorney General—though notice to the Attorney General is still not required unless 500 or more Washington residents were affected by the breach.

There are several takeaways for employers here:

  • First, employers must be aware of the types of data elements the organization maintains on its employees (or other individuals, such as customers or clients), how that data is maintained, and what happens to that data when it is no longer needed.
  • Employers should also examine the necessity of maintaining certain types of data, and consider narrowing the scope of data elements that the organization maintains by ceasing to collect and maintain unnecessary data—even if not currently listed in the state’s definition of “personal information.”
  • Until now, Washington employers may not have been overly concerned with securing certain types of data, such as an employee’s date of birth or health insurance policy number. But once HB 1071’s amendments take effect, that information could trigger breach notification duties if subject to unauthorized access or disclosure.
  • Finally, employers should ensure the organization has sound policies in place specifically to deal with sensitive data (i.e., “personal information”) deemed necessary to maintain.

Texans like the adage “Everything is Bigger in Texas.” So, as the Lone Star State follows its counterparts and the federal government in considering broad, sweeping privacy protections, legislators introduced two competing privacy bills this session: the Texas Consumer Privacy Act and the Texas Privacy Protection Act.

Readers should note that the 2019 Texas Legislative Session is set to end on May 27, 2019, although a special session may be called to address items not resolved during the regular session. If privacy legislation is not passed, state lawmakers would not consider it again until 2021, as the legislature only meets every other year, for 140 days. If either of the bills were to pass this session, the effective date could be as early as September 2020.

Even if neither bill passes this session, which is likely given the legislative hurdles that must be cleared within the limited timeframe, privacy as an issue is not going away in Texas (or anywhere else, for that matter). And, given that Texas is the second-largest economy in the U.S., any privacy legislation will have a big impact. The current prediction is that Texas will take a back seat to watch how California implements the CCPA, and (hopefully) learn from some of its pain points in order to adopt legislation in 2021.

Nevertheless, below is an overview of the two pending bills in their current form.

Texas Consumer Privacy Act (“TXCPA”)

The TXCPA is similar to the California Consumer Privacy Act (“CCPA”). It provides Texas consumers with rights to:

  • Know what information is being collected, distributed and sold about them;
  • Opt-out of sales of their information, including a requirement that businesses include a “Do Not Sell My Information” link on their website; and
  • Request that their information be deleted.

The TXCPA would also require businesses subject to the act to:

  • Provide notification of categories of personal information collected and how each category would be used;
  • Provide an online privacy policy or notice; and
  • Provide methods for consumers to submit data requests and disclose certain information in response to such requests.

It also borrows concepts from the EU GDPR around transparency and notice.

Similar to the CCPA, there are questions about how the bill would define a consumer and whether it would apply to employees. Like the CCPA, the TXCPA also provides rights to households, but this term is currently not well defined. The TXCPA does not establish a business duty to implement and maintain security procedures, nor does it allow a private cause of action for consumers in the event of a breach. The Texas Attorney General would enforce the law, with civil penalties of up to $2,500 per violation (and up to $7,500 per intentional violation).

In its current form, the TXCPA would only apply to certain businesses, including those that collect consumer personal information. These types of businesses would also have to meet certain thresholds.

Texas Privacy Protection Act (HB 4390)

The TXPPA distinguishes itself from the TXCPA in its applicability and level of detail. It also does not provide the same consumer rights as the TXCPA. For the TXPPA to apply, a business must:

  • Do business in Texas;
  • Have more than 50 employees;
  • Collect personally identifiable information (“PII”) of more than 5,000 individuals, households, or devices (or have this information collected on its behalf); this applies only to the collection of PII over the Internet or a digital network, or through a computing device associated with a specific end user. The requirement is not limited to Texas residents, meaning an Internet business with only a handful of customers in Texas, but numerous customers elsewhere, may still be subject to the law; and
  • Either:
    • Have annual gross revenue of more than $25 million; or
    • Derive 50% or more of its annual revenue from the processing of PII.

The traditional PII categories, like social security number, driver’s license number, credit card or financial account information, etc. are expanded under the TXPPA to include biometric information, religious affiliation, racial or ethnic origin information, unique genetic information, physical or mental health information, precise geolocation data and the private communications or other user-created content of an individual that is not publicly available.

The TXPPA requires explicit permission from the individual to whom the information pertains, unless processing is required by law. A business may only process PII that is relevant to accomplishing the purpose for which it is to be processed, and that purpose must be specified by notice prior to collection. Processing also may not violate state or federal law or infringe on an individual’s constitutional rights or privileges. The TXPPA also gives individuals the right to access their PII and the right to be forgotten.

The TXPPA requires impacted businesses to establish and maintain a comprehensive security program containing safeguards for PII, although the current bill offers little guidance on what that program must include. Like the TXCPA, there is no private cause of action for a breach of the duty to protect PII. Businesses would also be liable when a service provider mishandles their data.

Also like the TXCPA, the Texas Attorney General may bring an action and recover civil penalties, but they are higher under the TXPPA – up to $10,000 per violation, not to exceed a total of $1 million.
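The interaction between the per-violation amount and the overall cap can be sketched in a few lines. This is a hypothetical illustration only; the function name and violation counts are invented, not drawn from the bill.

```python
# Hypothetical sketch of the TXPPA penalty structure described above:
# up to $10,000 per violation, capped at $1 million total.
# The violation counts below are invented inputs for illustration.

def txppa_penalty(violations: int, per_violation: int = 10_000) -> int:
    """Maximum civil penalty the Texas AG could recover under the TXPPA."""
    return min(violations * per_violation, 1_000_000)

print(txppa_penalty(50))   # -> 500000
print(txppa_penalty(100))  # -> 1000000 (the cap is reached at 100 violations)
print(txppa_penalty(500))  # -> 1000000 (capped)
```

The cap matters: once a matter involves 100 or more violations at the maximum per-violation amount, the exposure stops growing, unlike the uncapped TXCPA penalties.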

Either bill, if passed into law, would keep Texas in line with other states currently enhancing their privacy and security laws to keep up with the California Consumer Privacy Act set to take effect January 1, 2020.  Organizations across the United States should be assessing and reviewing their data collection activities, building robust data protection programs, and investing in written information security programs (WISPs).

 

It was looking like Washington state would be the first state to follow the California Consumer Privacy Act (CCPA), with a GDPR-like law of its own. That effort has stalled, perhaps temporarily. However, both Washington’s House and Senate voted unanimously to send HB 1071 to Gov. Jay Inslee, which would substantially expand the state’s current data breach notification obligations.

Here are some of the highlights:

Definition of personal information. Following many other states, the new law would add to the data elements that if breached could trigger a notification obligation. Currently, personal information includes an individual’s first initial or first name and last name, together with one or more of the following – (i) Social Security number, (ii) Driver’s license number or Washington identification card number; or (iii) Account number or credit or debit card number, in combination with any required security code, access code, or password that would permit access to an individual’s financial account.

The following elements would be added to the list:

  • Full date of birth;
  • Private key unique to an individual and that is used to authenticate or sign an electronic record;
  • Student, military, or passport identification number;
  • Health insurance policy number or health insurance identification number;
  • Any information about a consumer’s medical history or mental or physical condition or about a health care professional’s medical diagnosis or treatment of the consumer;
  • Biometric data generated by automatic measurements of an individual’s biological characteristics such as a fingerprint, voiceprint, eye retinas, irises, or other unique biological patterns or characteristics that is used to identify a specific individual; and
  • Username or email address in combination with a password or security questions and answers that would permit access to an online account.

In addition, these elements (other than online account credentials) could be considered personal information even without the consumer’s first name or first initial and last name. That would be the case if encryption, redaction, or other methods have not been applied to render the element(s) unusable and the element(s) would enable a person to commit identity theft against a consumer.

Special Rule for Online Accounts. To combat the common practice of using the same username and password for different accounts (note to reader, if this is you, stop reading this post and go change your account credentials), the new law would require notifications to provide some direction on this point. Specifically, when a breach involves a username or password, notice may be provided electronically or by email, and must direct affected persons to promptly change their passwords and security questions or answers, as applicable. The notice should also direct affected persons to take other appropriate steps to protect the online account and all other online accounts for which they use the same username or email address and password or security question or answer.

The new law goes a step further when the person or business providing the notice also furnished the email account to the affected person. In that case, notification must be provided using a permissible method other than email to that account, and must also include the information noted above for changing passwords on at-risk accounts.

Notice Timing and Content. Like other state breach notification laws, Washington’s law requires notification be provided in the most expedient time possible and without unreasonable delay. Current law provides, however, that notice may not be provided later than forty-five calendar days following discovery. The new law reduces that period to thirty calendar days both for notice to individuals as well as to the Attorney General.

Importantly, the new law retains the exceptions to the notification period – notice may be delayed at the request of law enforcement or if due to measures necessary to determine the scope of the breach and restore the reasonable integrity of the data system. It is not clear if these exceptions also apply for notifying the Attorney General.

When notification is required, the new law adds to existing content requirements by mandating that notifications include, if known, the time frame of exposure – the date of the breach and the date of the discovery of the breach. Additional information also must be provided under the new law to the Attorney General, but under existing law that notice is required only if more than 500 persons are affected by the breach.

If enacted, the changes in HB 1071 are a good example of why organizations need to continue monitoring these developments and revisiting their incident response plans (IRPs). For example, some organizations may be caught off guard by the expanding definition of personal information under these laws. Date of birth typically is not included as an element of personal information in most other states (North Dakota is one exception). Out-of-date template letters can also undermine the effectiveness of an organization’s IRP.

As we reported, in late February, California Attorney General Xavier Becerra and Senator Hannah-Beth Jackson introduced Senate Bill 561, legislation intended to strengthen and clarify the California Consumer Privacy Act (CCPA). This week, the Senate Judiciary Committee referred the bill to the Senate Appropriations Committee by a vote of 6-2. This move came despite concerns raised about the scope of the amendment’s expanded private right of action. It is worth noting that a restricted private right of action is believed to have been fundamental to the compromise that led to the CCPA becoming law.

If SB 561 becomes law, it would make a number of significant changes to the current law. In particular, SB 561 would significantly expand the scope of the private right of action presently written into the CCPA. In its current form, the CCPA provides consumers a private right of action if their nonencrypted or nonredacted personal information is subject to an unauthorized access, exfiltration, theft, or disclosure because the covered business did not meet its duty to implement and maintain reasonable safeguards to protect that information. The amendment proposed under SB 561 broadens this provision to grant consumers a private right of action if their rights under the CCPA are violated.

This could become very costly for businesses subject to the CCPA. A plaintiff suing under the CCPA can recover statutory damages of not less than $100 and not greater than $750 per incident, or actual damages, whichever is greater, as well as injunctive or declaratory relief and any other relief the court deems proper. With the change under SB 561, violations of rights under the statute, such as the rights to certain notifications or the right to have certain information deleted upon request, potentially could trigger statutory damages.
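The “greater of statutory or actual damages” formula scales quickly with class size. The sketch below is a hypothetical illustration of that arithmetic; the consumer counts, dollar figures, and function name are invented, not taken from any filed case.

```python
# Hypothetical illustration of the CCPA damages formula described above:
# a plaintiff may recover the GREATER of actual damages or statutory
# damages of $100-$750 per incident. All inputs are invented figures.

def ccpa_damages(consumers: int, statutory_per_incident: int, actual: int) -> int:
    """Recoverable amount: the greater of total statutory or actual damages."""
    if not 100 <= statutory_per_incident <= 750:
        raise ValueError("statutory damages must be $100-$750 per incident")
    statutory_total = consumers * statutory_per_incident
    return max(statutory_total, actual)

# A breach affecting 10,000 consumers at the $100 statutory floor already
# yields $1,000,000 -- more than many plaintiffs could show in actual damages.
print(ccpa_damages(10_000, 100, 250_000))  # -> 1000000
```

Because statutory damages accrue per consumer without requiring proof of actual loss, even the statutory floor can dwarf provable actual damages in a large class.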

A similar cause of action exists under an Illinois privacy law that you might have heard about, the Illinois Biometric Information Privacy Act or “BIPA.” That provision has resulted in a flood of litigation, including putative class actions, seeking to recover statutory damages for plaintiffs who allege their biometric information has been collected and/or disclosed in violation of the statute.

According to reports, while Senator Jackson promised to work with stakeholders to address concerns about an expanded private right of action, the lawmaker apparently is intent on maintaining the ability for consumers whose CCPA privacy rights are violated to sue, without having to rely on the Attorney General’s office to enforce the CCPA.

UPDATE: As discussed below, SB2134, as introduced, would have amended BIPA to delete the language that creates a private right of action and provide, instead, that violations resulting from the collection of biometric information by an employer for employment, human resources, fraud prevention, or security purposes would be subject to the enforcement authority of the Department of Labor. But, to survive, SB 2134 needed to be reported out of committee by March 28, 2019. That did not happen. Again, businesses should continue their efforts to comply with the requirements of BIPA.

Many businesses currently are defending a wave of class action lawsuits filed under Illinois’ Biometric Information Privacy Act, popularly known as “BIPA.”  The floodgates to litigation were opened earlier this year when the Illinois Supreme Court ruled that individuals need not allege actual injury or adverse effect, beyond a violation of their rights under BIPA, in order to qualify as an “aggrieved” person and be entitled to seek liquidated damages, attorneys’ fees and costs, and injunctive relief under the Act.  Potential damages are substantial, as BIPA provides for statutory damages of $1,000 per negligent violation or $5,000 per intentional or reckless violation of the Act. The majority of BIPA suits have been brought as class actions seeking statutory damages on behalf of each individual affected, exposing businesses to potentially crushing damages.
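To see why the exposure is described as “potentially crushing,” the per-violation amounts quoted above can be multiplied out. This is a rough, hypothetical back-of-the-envelope sketch; the class sizes and the mix of violation types are invented inputs.

```python
# Hypothetical estimate of BIPA statutory-damages exposure for a putative
# class, using the amounts quoted above: $1,000 per negligent violation and
# $5,000 per intentional or reckless violation. Inputs are invented.

def bipa_exposure(negligent: int, intentional: int) -> int:
    """Aggregate statutory damages exposure, before fees and costs."""
    return negligent * 1_000 + intentional * 5_000

# Even a modest 2,000-member class alleging only negligent violations
# faces $2,000,000 in statutory damages, before attorneys' fees and costs.
print(bipa_exposure(negligent=2_000, intentional=0))  # -> 2000000
```

Add attorneys’ fees and costs on top, and it is easy to see why the Illinois Supreme Court’s no-actual-injury ruling triggered a flood of filings.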

In February, SB2134 was introduced and would amend BIPA to delete the language that creates a private right of action. If enacted, the amendment would provide, instead, that violations resulting from the collection of biometric information by an employer for employment, human resources, fraud prevention, or security purposes would be subject to the enforcement authority of the Department of Labor. The amendment would permit employees and former employees to file a complaint with the DOL, provided they are filed within one year from the date of the violation. Violations of BIPA that constitute a violation of the Consumer Fraud and Deceptive Business Practices Act would be enforced by the Attorney General. If the amendment is enacted, the changes would be effective immediately. Of course, it is unclear what the effect would be for pending litigation.

We expect businesses will be watching developments concerning SB2134, which is currently in committee, closely. However, businesses should continue their efforts to comply with the requirements of BIPA, which do not appear to be altered by the changes proposed in SB2134.