When privacy geeks talk “privacy,” it is not uncommon for them to use certain terms interchangeably – personal data, personal information, personally identifiable information, private information, individually identifiable information, protected health information, or individually identifiable health information. They might even speak in acronyms – PI, PII, PHI, NPI, etc. Blurring those distinctions might be OK for casual conversation, but as organizations develop data privacy and security compliance programs, the meanings of these terms can have significant consequences. A good example exists within the California Consumer Privacy Act (“CCPA”) and its interaction with other laws.

The CCPA, effective January 1, 2020, contains an expansive definition of “personal information.” See Cal. Civ. Code Sec. 1798.140(o). The basic definition is information that identifies, relates to, describes, is reasonably capable of being associated with, or could reasonably be linked, directly or indirectly, with a particular consumer or household. The definition goes on to enumerate, without limitation, certain categories of information (e.g., identifiers, website activity, biometric information, geolocation) if they identify, relate to, describe, are reasonably capable of being associated with, or could be reasonably linked, directly or indirectly, with a particular consumer or household. With respect to this broad set of data, the CCPA extends to California consumers substantial rights, including the right to request deletion of that data or to opt out of its sale.

The CCPA’s private right of action for data breaches, however, applies to a much narrower subset of the “personal information” defined above. Specifically, the CCPA incorporates another section of California law, Cal. Civ. Code Sec. 1798.81.5(d)(1)(A), to define the personal information that, if breached because the business failed to maintain reasonable safeguards, could expose the business to statutory damages of up to $750 per consumer per incident. For this purpose, personal information means the following (a simple classification sketch follows the definition):

An individual’s first name or first initial and the individual’s last name in combination with any one or more of the following data elements…:

(i) Social security number.

(ii) Driver’s license number, California identification card number, tax identification number, passport number, military identification number, or other unique identification number issued on a government document commonly used to verify the identity of a specific individual.

(iii) Account number or credit or debit card number, in combination with any required security code, access code, or password that would permit access to an individual’s financial account.

(iv) Medical information.

(v) Health insurance information.

(vi) Unique biometric data generated from measurements or technical analysis of human body characteristics, such as a fingerprint, retina, or iris image, used to authenticate a specific individual.
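For teams mapping their data stores against this definition, the combination requirement can be expressed as a simple rule: a record falls within the breach provision only when a name is paired with at least one enumerated element. The minimal Python sketch below illustrates that rule; the field names are hypothetical, element (iii)’s credential requirement is simplified, and the statutory text of course controls.

    # Illustrative sketch of the Cal. Civ. Code Sec. 1798.81.5(d)(1)(A)
    # combination rule. Field names are hypothetical; the statute controls.
    SENSITIVE_ELEMENTS = {
        "ssn",                     # (i) Social Security number
        "drivers_license",         # (ii) government-issued identification numbers
        "tax_id",
        "passport_number",
        "military_id",
        "financial_account",       # (iii) account/card number plus required credentials
        "medical_information",     # (iv)
        "health_insurance_info",   # (v)
        "biometric_data",          # (vi) unique biometric data used for authentication
    }

    def is_breach_personal_information(record: dict) -> bool:
        """True when a name is paired with at least one enumerated element."""
        has_name = bool(record.get("first_name") or record.get("first_initial")) \
            and bool(record.get("last_name"))
        has_element = any(record.get(field) for field in SENSITIVE_ELEMENTS)
        return has_name and has_element

    # A name plus an SSN is in scope; a name alone is not.
    print(is_breach_personal_information(
        {"first_name": "Jane", "last_name": "Doe", "ssn": "123-45-6789"}))  # True
    print(is_breach_personal_information(
        {"first_name": "Jane", "last_name": "Doe"}))                        # False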

Note also that the CCPA excludes certain information from its general definition of personal information, such as “protected health information” maintained by covered entities and business associates under the Health Insurance Portability and Accountability Act (“HIPAA”).

But the PI, PII, PHI…conundrum does not end with the CCPA. An organization with CCPA obligations also may maintain “private information” of New York residents. Under the New York Stop Hacks and Improve Electronic Data Security Act (“SHIELD Act”), that organization would have to adopt reasonable safeguards to protect “private information,” defined, in general, to mean any information concerning a natural person that, because of an identifier, can be used to identify that person, when it is in combination with any one or more of the following data elements:

  • social security number;
  • driver’s license number or non-driver identification card number;
  • account number, or credit or debit card number, which alone or together with a required code would permit access to an individual’s financial account;
  • biometric information, meaning data generated by electronic measurements of an individual’s unique physical characteristics, such as a fingerprint, voice print, retina or iris image, or other unique physical representation or digital representation of biometric data which are used to authenticate or ascertain the individual’s identity.

Private information also includes a user name or e-mail address in combination with a password or security question and answer that would permit access to an online account.

Confused yet? Perhaps your organization is not subject to the CCPA or the NY SHIELD Act, but you own and operate a website that collects personal information from consumers who reside in California and Delaware. Laws in those states require a website privacy policy that describes certain practices concerning “personally identifiable information,” defined in Delaware to mean:

any personally identifiable information…collected online by the operator…from that user…including a first and last name, a physical address, an e-mail address, a telephone number, a Social Security number, or any other identifier that permits the physical or online contacting of the user, and any other information concerning the user collected by the operator…from the user and maintained in personally identifiable form in combination with any identifier described in this paragraph.

A similar definition exists under California law. These distinctions merely scratch the surface of, and add to, the complexity of the emerging patchwork of data privacy and security law in the United States.

So, when thinking about personal information, it is important to remember not only that the definition extends well beyond one’s name and Social Security number, but also that the term itself and its definition likely will differ depending on the particular statutes or regulations you are analyzing. When assessing an organization’s threats and vulnerabilities to personal information, or preparing policies and procedures to safeguard it, be sure to develop an appropriate definition that takes into account the necessary data elements.

After years of data breaches, mass data collection, identity theft crimes, and failed attempts at broad-based federal legislation, 2020 may be the year that state privacy and data security legislation begins to take hold in the U.S. For example, the California Consumer Privacy Act (“CCPA”) and the New York Stop Hacks and Improve Electronic Data Security Act (“SHIELD Act”), both effective in 2020 and with application outside their respective states, are already spurring more active compliance efforts. This rapidly developing area of law presents a dizzying challenge for “compliance” personnel whose plates are already filled with an alphabet soup of regulation. The challenge tends to fall particularly hard on in-house counsel and human resources professionals and their IT counterparts whose teams (many times of only one or two) are frequently spread too thin.

The CCPA and SHIELD Act are by no means the only laws on the books. Other state legislatures, such as New Jersey’s, are advancing comprehensive data privacy and security laws. And, of course, many states have enacted similar laws – all 50 states have data breach notification laws, and several states (e.g., Colorado, Florida, Illinois, Maryland, Massachusetts, Nevada, Oregon) require businesses to maintain reasonable safeguards to protect personal information, including written contracts with vendors that access personal information. On top of that, certain organizations must comply with industry-specific federal mandates, such as the Health Insurance Portability and Accountability Act (“HIPAA”) and the Gramm-Leach-Bliley Act (“GLBA”), while others are balancing international regulation, most prominently the European Union’s General Data Protection Regulation (“GDPR”).

Meeting this challenge can seem overwhelming, but there are some strategies and best practices that can help in 2020 and beyond.

  1. Set expectations. Compliance is not a one-time endeavor. It is an ongoing effort – a marathon, not a sprint. Building a strong compliance and risk management program is necessary, but it will take time, resources, and commitment. The support of organization leadership is critical, so get them on board and apprise them of the costs of building an achievable program and of the costs of doing nothing.
  2. Build your team. The data privacy and security challenge cannot be solved by the IT department alone. Technology safeguards are critical, but they do not replace strong administrative, physical, and organizational controls. In-house counsel and HR professionals should work on eliminating silos and push for an interdisciplinary team – sales, finance, R&D, marketing, operations, legal, HR, IT. Collectively, the team should have deep institutional knowledge; a strong understanding of the business, its need for and uses of data, and threats and vulnerabilities to data; an awareness of industry expectations; and the capacity to influence new practices and procedures for processing data.
  3. Maintain a Written Information Security Program. It is not enough to say, “We are doing that.” From a compliance perspective, data privacy and security policies and procedures need to be in writing. And, written policies and procedures also help to maintain consistency in the organization’s practices and better support discipline for violations of the rules.
  4. Vendors – trust but verify. Third-party vendors provide critical support to organizations often involving access to sensitive information. The idiom “a chain is no stronger than its weakest link” is quite appropriate considering many organizations have experienced data breaches because of their vendors’ security incidents. Organizations simply must have a better understanding of the strength of their vendors’ safeguards for protecting information. They should maintain strong vendor management programs that begin to apply at procurement and continue until the service agreement terminates and the organization’s data is secured.
  5. Communications About Your Program Should be Accurate and Accessible. Increasingly, the law requires organizations to post website statements summarizing their data privacy and security practices. Examples include HIPAA and laws in California, Delaware, and Nevada. These statements should be accurate and accessible. Inaccurate statements, such as those that overstate security safeguards, can lead to deceptive trade practice claims. As required by the CCPA and urged by the flood of litigation under Title III of the Americans with Disabilities Act, the statements also need to be accessible to persons with disabilities.
  6. Know the Law and Stay in Touch. An organization’s compliance team need not, and should not, consist solely of lawyers. But it should maintain a keen awareness of applicable legal mandates and a general sense of where the law is headed as it relates to the organization. Active participation in trade and similar associations can be particularly helpful, as can subscribing to dedicated legal resources, blogs, etc.
  7. Training and Awareness. Employees falling victim to phishing attacks is one of the most frequent causes of a data breach. Regular, role-based training on the organization’s policies and procedures along with general security awareness training can substantially reduce this and other data risks.
  8. Embrace technology…carefully. The latest devices and software applications can benefit the organization’s business enormously. However, they may not have been developed or designed with data privacy and security in mind, or at least as needed to address the organization’s compliance needs. Consider biometric technologies that tout stronger identity verification for applications such as POS system access and worker time management. If not rolled out or configured carefully, these devices can cause significant legal exposure relating to the collection, disclosure, and destruction of personal information.
  9. Less is more. Some organizations pride themselves on their comprehensive recordkeeping systems, for example, claiming to have retained all records since inception. Such practices may not be necessary, and in many cases are not prudent. Retaining massive amounts of data may be needed in certain contexts, but it should be carried out strategically and deliberately, with a plan to shed the data once its usefulness has ceased.
  10. Be reasonable. Perhaps this should be first on the list. But it is last to serve as a reminder that whatever steps are taken, they should be reasonable. Indeed, most regulatory data privacy and security frameworks require “reasonable” safeguards. Of course, this is not easy to define, but reasonableness should be a fundamental principle guiding your program.


With 2020 poised to bring more acuity to the direction of privacy and security law in the U.S., adopting some or all of the above strategies and best practices will help support a strong, adaptive, ongoing, and reasonable privacy and information security program.

State and local governments have increasingly become targets of cybersecurity attacks. This year, attacks on Baltimore and on Lincoln County, North Carolina reportedly will cost those government entities $18.2 million and as much as $400,000, respectively, to recover. Last year, Atlanta spent more than $7 million to recover from a ransomware attack. A report by cybersecurity firm Coveware shows that governments paid almost 10 times as much in ransom, on average, as their private-sector counterparts over the second quarter of 2019.

Recognizing this risk, Massachusetts Governor Charlie Baker announced a new program to help cities and towns develop strategies to prevent cyberattacks. “The more capable the public realm becomes, the greater the challenges and the greater the risks associated with trust,” Baker said. “We need to do things to help.”

During the first Massachusetts Cybersecurity Week, at the state’s third annual Cybersecurity Forum capstone event, Governor Baker introduced an expansive cybersecurity program, including statewide workshops for municipalities to work together to enhance their cybersecurity capabilities, which will be led by the MassCyberCenter at the MassTech Collaborative.

Governor Baker discussed the “smart” future – a world of smart buildings, autonomous cars and smart communities that is not too far away, and emphasized that states and municipalities need to be prepared. “We have a long way to go in the public sector to digitize our assets. I don’t think that’s a really big surprise to anybody in this room,” Baker said at a recent State House event, addressing a group of 200 executives from the private, public, and R&D sectors.

Baker’s Cybersecurity Program complements a similar program led by the National Governors Association (NGA), announced in July, in which the NGA will collaborate with cyber-related state agencies to help improve cybersecurity strategies in the public sector across the nation. Massachusetts was one of seven states selected by the NGA for the first phase of this program, to help develop an action plan and identify key priorities in cybersecurity.

Cyberattacks continue to be a major risk for private companies as well. Coveware reported that the average size of private companies targeted by ransomware in the second quarter of 2019 was 925 employees. McAfee Labs reported that ransomware attacks grew by 118% in the first quarter of 2019. Government entities and private companies alike should conduct risk assessments to develop appropriate security measures to protect against the risk of cyberattacks.

This cybersecurity program is just another example of how Massachusetts continues to lead the way for other states on privacy and security matters. Check out other Massachusetts initiatives discussed on the blog.


Illinois continues to lead the way in privacy and security legislation. The Prairie State is home to the Biometric Information Privacy Act, first-of-its-kind legislation regulating the collection and possession of biometric information, and also the Personal Information Protection Act, considered one of the more expansive data breach notification laws in the nation. And now, in what has been described as “the most momentous legislative session in decades,” the Illinois state legislature unanimously passed the Artificial Intelligence Video Interview Act (“the AIVI Act”), HB2557, which imposes consent, transparency, and data destruction requirements on employers that implement AI technology during the job interview process. The AIVI Act, the first state law to regulate AI use in video interviews, will take effect January 1, 2020.

Below are several key obligations the AIVI Act imposes on employers:

  • Notification – The employer must notify the job applicant that AI will be used during the video interview to analyze the applicant’s facial expressions and assess the applicant’s fitness for the position. An applicant must also be provided with an information sheet prior to the interview detailing how the AI works and the characteristics it uses to evaluate applicants.
  • Consent – Employers must obtain written consent from any applicant who is evaluated by an AI program. It is worth noting that an employer is not required to consider an applicant who refuses to provide consent for the use of AI.
  • Limitations on AI Use – An employer may not use AI to evaluate applicants who have not consented to the use of AI analysis. In addition, an employer may not share applicant videos, except with persons whose expertise is necessary to evaluate an applicant’s fitness for a position.
  • Data Destruction – If an applicant requests the destruction of a video interview, the employer must comply within 30 days of receiving the request. Further, the employer must instruct all persons that have received a copy of the applicant’s video interview to destroy the footage.

The AIVI Act does not contain a “definitions” section, and is vague on several key matters. For example, the law is silent on penalties and enforcement, and there is no definition of AI or guidance on how notification should be provided. AI use in the hiring process is still in its early stages and the AIVI Act will likely be amended as necessary, particularly as the practice becomes more commonplace.

While there is no other state legislation to serve as a comparison, since as early as 2014 the EEOC has been taking notice of “big data” technologies and the potential that the use of such technology may violate existing employment laws such as Title VII of the Civil Rights Act of 1964, the Age Discrimination in Employment Act, the Americans with Disabilities Act, and the Genetic Information Nondiscrimination Act. While the EEOC does not yet have an official policy on AI-based tools in the workplace, it has emphasized that the employer must assess the benefits of AI-based tools against increased exposure and risk of privacy and security issues. For more on the EEOC’s stance on AI, check out this interesting podcast episode with Dr. Romella El Kharzazi of the EEOC, “The EEOC and AI Based Assessments – the Inside Scoop” on the podcast Science 4-Hire.

Only time will tell the impact the AIVI Act will have on employment practices. But if the AIVI Act is treated in a similar manner to the BIPA, which the Illinois Supreme Court has held does not require a showing of actual injury to sue, employers should tread carefully with AI usage in the workplace. Moreover, it will likely not be long before other states enact similar legislation. Employers, regardless of jurisdiction, should be evaluating their hiring practices and procedures, particularly to ensure that written consent is obtained before the use of any technology that collects the sensitive information of a job applicant or employee.

On February 21, 2019, California Attorney General Xavier Becerra and Assemblymember Marc Levine (D-San Rafael) announced Assembly Bill 1130, intended to strengthen and expand California’s existing data breach notification law. On September 11, 2019, the bill passed both houses of the legislature and was presented to Governor Gavin Newsom. Last Friday, October 11, 2019, the Governor signed AB 1130, together with six additional bills related to the California Consumer Privacy Act of 2018 (“CCPA”), into law.

Prior to AB 1130, California’s breach notification law defined personal information in Cal. Civil Code Sec. 1798.81.5(d)(1)(A) to include a covered person’s first name (or first initial) and last name coupled with sensitive personal information such as Social Security numbers, driver’s license numbers, financial account numbers, and medical and health information. AB 1130 expands the types of personal information in that section to include biometric information (e.g., fingerprint, retina scan data, iris image) and government identifiers (e.g., tax identification number, passport number, military identification number).

In addition to expanding the elements of personal information that are subject to a notification obligation in the event of a data breach, the change also increases litigation risk following a data breach. This is because, under the CCPA, consumers affected by a data breach can bring an action for statutory damages when the breach is caused by the business’s failure to maintain reasonable safeguards. And the CCPA specifically incorporates Civil Code Sec. 1798.81.5(d)(1)(A), which AB 1130 expanded. Now, a broader set of personal information, if breached and not reasonably safeguarded, could expose businesses subject to the CCPA to substantial damages. A consumer can recover damages in an amount not less than $100 and not greater than $750 per consumer per incident, or actual damages, whichever is greater, as well as injunctive or declaratory relief and any other relief the court deems proper.
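To put those figures in perspective, the statutory exposure for a hypothetical breach can be estimated with simple arithmetic (illustrative only; actual awards depend on the court and the facts):

    # Illustrative only: CCPA statutory damages of $100-$750 per consumer
    # per incident (Cal. Civ. Code Sec. 1798.150). Breach size is hypothetical.
    def statutory_damages_range(affected_consumers: int) -> tuple:
        return (affected_consumers * 100, affected_consumers * 750)

    low, high = statutory_damages_range(10_000)
    print(f"${low:,} to ${high:,}")  # $1,000,000 to $7,500,000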

Thus, in addition to the costs of notifications a covered business may have to incur under the state’s breach notification law, which could include providing ID theft resolution and credit monitoring services, class action lawsuits brought pursuant to this provision of the CCPA could be very costly. The expansion of the definition of personal information to include biometric information and government identifiers only increases these risks. It would be prudent for businesses subject to the CCPA to ensure reasonable safeguards are in place to protect all of these elements of personal information, and make sure their third-party service providers are doing the same.

For years now, state laws have required subject organizations to provide notification to affected data subjects and, in some instances, to state agencies, consumer reporting agencies, and the media, when they experience a “breach” of certain categories of information.  And a growing number of states – including California, Colorado, Connecticut, Maryland, Massachusetts, Texas, and, most recently, New York – have gone a step further, requiring subject organizations to develop and implement “reasonable safeguards” to secure the personal information they collect and use.  With the passage of the California Consumer Privacy Act (“CCPA”), California is poised to establish the next frontier in U.S. privacy and data security law.

The CCPA, which is set to take effect on January 1, 2020, imposes on subject organizations not only the obligation to secure data, and to provide notification in the event of a breach, but also an obligation to develop programs to manage the sweeping suite of rights that the CCPA grants to consumers (a category which, as we’ve previously discussed, will likely include employees (at least in certain circumstances)).

The CCPA, which follows in the footsteps of the European Union’s GDPR, has already inspired the proposal of similar legislation in other states – such as Hawaii, Maryland, Massachusetts, Mississippi, New Mexico, and Rhode Island – as well as at the federal level.

Access & Portability

One significant right the CCPA grants consumers is the right to request information regarding:

  • the categories of personal information businesses collect about them:
    • identifiers – e.g., real name, address, social security number;
    • characteristics of protected classifications under California or federal law;
    • commercial information – e.g., products purchased, records of personal property;
    • biometric information;
    • internet or other electronic network activity – e.g., browsing history, search history;
    • geolocation data;
    • audio, visual, and similar information;
    • profession- or employment-related information;
  • the sources from which that personal information was collected (e.g., online order histories, online surveys, tracking pixels, cookies, web beacons);
  • the categories of personal information sold to third parties;
  • the categories of personal information disclosed for business purposes;
  • the categories of third parties to whom personal information was sold or disclosed (e.g., tailored advertising partners, affiliates, social media websites, service providers);
  • the business or commercial purposes for which personal information was collected or sold (e.g., fraud prevention, marketing, improving customer experience); and
  • the “specific pieces” of personal information collected.

The CCPA imposes a one-year lookback period from the time of the request, and mandates that, in the event consumers request access to their personal information, the subject business provide responsive materials “in a readily usable format that allows consumers to transmit [the] information from one entity to another without hindrance.”
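The statute specifies the categories of information, not a file format, but a machine-readable export is a natural way to satisfy the “readily usable format” requirement. The sketch below is purely illustrative; every field name and value is hypothetical.

    import json
    from datetime import date, timedelta

    # Hypothetical access-request response covering the one-year lookback.
    # JSON is one "readily usable" choice; the CCPA does not mandate a format.
    today = date.today()
    response = {
        "request_date": today.isoformat(),
        "lookback_start": (today - timedelta(days=365)).isoformat(),
        "categories_collected": ["identifiers", "commercial information",
                                 "internet activity", "geolocation data"],
        "sources": ["online order history", "cookies"],
        "categories_sold": [],
        "third_parties_disclosed_to": ["service providers"],
        "business_purposes": ["fraud prevention", "improving customer experience"],
        "specific_pieces": {"real_name": "Jane Doe", "email": "jane@example.com"},
    }
    print(json.dumps(response, indent=2))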

Deletion

Subject to certain exceptions (e.g., to complete the transaction for which the personal information was collected; to protect against malicious, deceptive, fraudulent, or illegal activity; or to identify and repair errors that impair existing and intended functionality), the CCPA permits consumers to request that subject businesses delete – and direct service providers to delete – personal information collected about them.

Opt Out

Under the CCPA, consumers are empowered to opt out of the “sale” of their personal information.  To facilitate consumers’ exercise of this right, subject businesses are required to provide a link titled “Do Not Sell My Personal Information” to a web page where consumers can opt out of having their personal information sold to third parties. Similarly, Nevada recently enacted a new online privacy law requiring businesses to offer consumers the right to opt out of the “sale” of their personal information, effective October 1, 2019.

Non-Discrimination

To protect consumers who exercise their rights under the CCPA, the law generally prohibits subject businesses from charging different prices or rates to consumers, providing different services to them, or denying them goods or services, because they exercised their CCPA rights.  That said, businesses are permitted to charge different prices or rates, or to provide different levels or qualities of goods or services, if those differences “reasonably relate” to the value provided to the consumer by the consumer’s data. Additionally, businesses may, under certain circumstances, offer financial incentives to consumers to entice them to permit the collection, retention, and/or sale of their information.

Privacy Policy

The CCPA requires subject businesses to disclose, and facilitate the exercise of, the above-discussed rights in their privacy policies.  Specifically, businesses should update their existing policies, or develop new policies, to include the following elements:

  • a description of the new rights afforded consumers under the CCPA;
  • a list of the categories of personal information collected by the business in the preceding 12 months;
  • a list of the categories of personal information sold or disclosed for a business purpose in the preceding 12 months;
  • a link to a “Do Not Sell My Personal Information” web-based opt-out tool;
  • a description of any financial incentives for providing data or not exercising rights (e.g., if the company offers a discount to consumers who provide their email addresses for marketing purposes, this incentive should be disclosed in the privacy policy); and
  • two or more designated methods for submitting information requests, including a toll-free number and a website address (if applicable).

Private Right Of Action

In contrast to many U.S. privacy and data security laws, the CCPA provides consumers a private right of action – albeit a limited one.  Specifically, the law empowers consumers to sue on their own behalf when a subject business’s failure to maintain “reasonable safeguards” results in the breach of their personal information.  Notably, the definition of personal information applicable to the private right of action is narrower than the definition used throughout the rest of the CCPA. A consumer can bring a private right of action under the CCPA only if the following information is breached: an individual’s name along with his or her Social Security, driver’s license, or California identification card number; account, credit card, or debit card number, in combination with a code or password that would permit access to a financial account; or medical or health insurance information. While this private right of action does not extend to the rights discussed above – which will be subject to agency enforcement – even this limited private right will, if the recent flood of claims brought under the Illinois Biometric Information Privacy Act is any indication, result in a significant volume of class action litigation.

Takeaways

With the January 1, 2020 deadline less than four months away, subject businesses need to promptly evaluate whether they are prepared to effectively navigate the expansive array of rights the CCPA extends to consumers.  To do so, businesses will need to, among other things: (1) map the personal information about California residents that they collect, use, and sell; (2) design and document policies, procedures, and practices to manage disclosure, access, and deletion requests, and to avoid discriminatory conduct; and (3) train their workforce members to effectively comply with those policies, procedures, and practices.

One final point of note:  The CCPA has been a work in progress over the last year. California’s legislative session ended on September 13th, with some final modifications to bills that would amend certain aspects of the CCPA. Unanimously approved in final form, they now move on to California Governor Gavin Newsom for consideration and final action on the CCPA by mid-October.  We will continue to track these developments.

Most businesses in the insurance industry have one thing in common – they collect and maintain significant amounts of sensitive, nonpublic information including personal information. Not surprisingly, insurance-related businesses are a target of cyberattacks and a few have faced some of the largest data breaches reported to date. Beyond the headlines, however, small and mid-sized insurance companies face similar risks, and governments have stepped up their scrutiny of cybersecurity. Hearing the calls for legislation and regulation, the National Association of Insurance Commissioners (NAIC) adopted a Data Security Model Law with the goal of having it adopted in all states within a few years. So far, eight states (see below) have adopted a version of the Model Law and it looks like more are on the way.

What is the NAIC’s Data Security Model Law?

In an effort that largely began with establishing a task force in 2014, the NAIC adopted a Data Security Model Law in November 2017. The Model Law is intended to provide a benchmark for any cybersecurity program. The requirements in the Model Law track some familiar data security frameworks, such as the HIPAA Security Rule. It also has many similarities to the New York State Department of Financial Services (NYDFS) cybersecurity regulations (specifically, 23 NYCRR 500). Note that licensees are not subject to the Model Law unless and until the state where they are licensed adopts a version of it; at that point, the licensee must comply with that state’s law.

Who is Subject to the Model Law?

The Model Law generally applies to “Licensees,” defined as:

any person licensed, authorized to operate, or registered, or required to be licensed, authorized, or registered pursuant to the insurance laws of this State but shall not include a purchasing group or a risk retention group chartered and licensed in a state other than this State or a Licensee that is acting as an assuming insurer that is domiciled in another state or jurisdiction.

Licensees range from large insurance carriers to small independent adjusters. These include individuals providing insurance-related services, firms such as agency and brokerage businesses, and insurance companies. Additionally, there may be businesses that require a license but are not traditionally considered to be in the insurance business. Examples include car rental companies and travel agencies that offer insurance packages in connection with their primary business.

The Model Law provides exceptions for certain licensees. For example, licensees with fewer than ten employees (including independent contractors) are exempt from the requirement to maintain an information security program. However, they remain subject to the other provisions in the Model Law, such as the requirement to provide notification in the case of certain cybersecurity events.

What are some of the requirements of the Model Law?

On Thursday, New York Governor Andrew Cuomo signed into law the Stop Hacks and Improve Electronic Data Security Act (SHIELD Act), sponsored by Senator Kevin Thomas and Assemblymember Michael DenDekker. The SHIELD Act, which amends the State’s current data breach notification law, imposes more expansive data security and data breach notification requirements on companies, in the hope of  ensuring better protection for New York residents from data breaches of their private information. The SHIELD Act takes effect on March 21, 2020. Governor Cuomo also signed into law the Identity Theft Prevention and Mitigating Services Act that requires credit reporting agencies that face a breach involving Social Security numbers to provide five years of identity theft prevention and mitigation services to affected consumers. It also gives consumers the right to freeze their credit at no cost. This law becomes effective in 60 days.

Below are several FAQs highlighting key features of the SHIELD Act:

What is Private Information under the SHIELD Act?

Unlike other state data breach notification laws, New York’s original data breach notification law included definitions for “personal information” and “private information.” The current definition of “personal information” remains: “any information concerning a natural person which, because of name, number, personal mark, or other identifier, can be used to identify such natural person.” However, the SHIELD Act expands the definition of “private information,” which sets forth the data elements that, if breached, could trigger a notification requirement. Under the amended law, “private information” means either of the following (a simple classification sketch follows the definition):

  • personal information consisting of any information in combination with any one or more of the following data elements, when either the data element or the combination of personal information plus the data element is not encrypted, or is encrypted with an encryption key that has also been accessed or acquired:
    • social security number;
    • driver’s license number or non-driver identification card number;
    • account number, credit or debit card number, in combination with any required security code, access code, password or other information that would permit access to an individual’s financial account; account number, credit or debit card number, if circumstances exist wherein such number could be used to access an individual’s financial account without additional identifying information, security code, access code, or password; or
    • biometric information, meaning data generated by electronic measurements of an individual’s unique physical characteristics, such as a fingerprint, voice print, retina or iris image, or other unique physical representation or digital representation of biometric data which are used to authenticate or ascertain the individual’s identity; OR
  • a user name or e-mail address in combination with a password or security question and answer that would permit access to an online account.
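In other words, the amended definition has two independent branches: a combination branch (personal information plus an enumerated data element, subject to the encryption caveat) and a standalone online-credentials branch. Here is a minimal Python sketch, with hypothetical field names and a deliberately simplified encryption analysis:

    # Illustrative sketch of the SHIELD Act "private information" test.
    # Field names are hypothetical; the encryption caveat is simplified.
    COMBINATION_ELEMENTS = {"ssn", "drivers_license", "non_driver_id",
                            "financial_account", "biometric_information"}

    def is_private_information(record: dict) -> bool:
        # Branch 1: personal information plus an enumerated data element,
        # unless the data remains encrypted and the key was not compromised.
        combination = bool(record.get("personal_information")) and any(
            record.get(element) for element in COMBINATION_ELEMENTS)
        safely_encrypted = (record.get("encrypted")
                            and not record.get("key_compromised"))
        if combination and not safely_encrypted:
            return True
        # Branch 2: user name or e-mail plus password or security Q&A,
        # standing alone, with no enumerated-element requirement.
        login = record.get("user_name") or record.get("email")
        secret = record.get("password") or record.get("security_qa")
        return bool(login and secret)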

It is worth mentioning that the SHIELD Act’s expansive definition of “private information” is still not as broad as the definition of the analogous term under the laws of other states. For example, California, Illinois, Oregon, and Rhode Island have expanded the applicable definitions in their laws to include not only medical information, but also certain health insurance identifiers.

How has the term “breach of security of the system” changed?

The SHIELD Act alters the definition of “breach of the security of the system” in two significant ways. First, it broadens the circumstances that qualify as a “breach” by including within the definition of that term incidents that involve “access” to private information, regardless of whether they resulted in “acquisition” of that information. Under the old law, access absent acquisition did not qualify as a breach. In connection with this change, the amendments also add several factors for determining whether there has been unauthorized access to private information, including “indications that the information was viewed, communicated with, used, or altered by a person without valid authorization or by an unauthorized person.”

Second, as discussed above, the expansion of the definition of private information effectively expands the situations which could result in a breach of the security of the system.  Notably, the SHIELD Act retains the “good faith employee” exception to the definition of “breach.”

Are there any substantial changes to data breach notification requirements? And who must comply?

Any person or business that owns or licenses computerized data which includes private information of New York residents must comply with breach notification requirements, regardless of whether the person or business conducts business in New York.

That said, there are several circumstances which would exempt a business from the breach notification requirements. For example, notice is not required if “exposure of private information” was an “inadvertent disclosure and the individual or business reasonably determines such exposure will not likely result in misuse of such information, or financial harm to the affected persons or emotional harm in the case of unknown disclosure of online credentials.” Further, businesses that are already regulated by and comply with data breach notice requirements under certain applicable state or federal cybersecurity laws (e.g., HIPAA, NY DFS Reg. 500, Gramm-Leach-Bliley Act) are not required to further notify affected New York residents; however, they are still required to notify the New York State Attorney General, the New York State Department of State Division of Consumer Protection, and the New York State Division of the State Police.

What are the “reasonable” data security requirements? And who must comply with them?

As with the notification requirements, the SHIELD Act requires that any person or business that owns or licenses computerized data which includes private information of a resident of New York must develop, implement, and maintain reasonable safeguards to protect the security, confidentiality, and integrity of the private information. Again, businesses in compliance with laws like HIPAA and the GLBA are considered in compliance with this section of the law. Small businesses are subject to the reasonable safeguards requirement; however, their safeguards may be “appropriate for the size and complexity of the small business, the nature and scope of the small business’s activities, and the sensitivity of the personal information the small business collects from or about consumers.” A small business is considered any business with fewer than fifty employees, less than $3 million in gross annual revenue in each of the last three years, or less than $5 million in year-end total assets.
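Because the three thresholds are disjunctive, whether an organization qualifies as a small business can be checked mechanically. A minimal sketch, assuming headcount, revenue, and asset figures are at hand:

    # Illustrative SHIELD Act small-business test: fewer than 50 employees,
    # under $3M gross annual revenue in each of the last three years,
    # OR under $5M in year-end total assets.
    def is_small_business(employees: int, last_three_years_revenue: list,
                          year_end_assets: float) -> bool:
        return (employees < 50
                or all(r < 3_000_000 for r in last_three_years_revenue)
                or year_end_assets < 5_000_000)

    # 30 employees qualifies on headcount alone, despite revenue and assets.
    print(is_small_business(30, [4_000_000, 6_000_000, 8_000_000], 10_000_000))  # True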

The law provides examples of practices that are considered reasonable administrative, technical and physical safeguards. For example, risk assessments, employee training, selecting vendors capable of maintaining appropriate safeguards and implementing contractual obligations for those vendors, and disposal of private information within a reasonable time period, are all practices that qualify as reasonable safeguards under the law.

Are there penalties for failing to comply with the SHIELD Act?

The SHIELD Act does not authorize a private right of action, and in turn class action litigation is not available. Instead, the Attorney General may bring an action to enjoin violations of the law and obtain civil penalties. For data breach notification violations that are not reckless or knowing, the court may award damages for actual costs or losses incurred by a person entitled to notice, including consequential financial losses. For knowing and reckless violations, the court may impose penalties of the greater of $5,000 or up to $20 per instance, with a cap of $250,000. For reasonable safeguard requirement violations, the court may impose penalties of not more than $5,000 per violation.
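The notification penalty formula for knowing and reckless violations – the greater of $5,000 or $20 per instance, capped at $250,000 – works out as follows (illustrative arithmetic only; courts retain discretion):

    # Illustrative cap math for knowing/reckless notification violations:
    # the greater of $5,000 or $20 per instance, capped at $250,000.
    def max_notification_penalty(instances: int) -> int:
        return max(5_000, min(20 * instances, 250_000))

    print(max_notification_penalty(100))     # 5000   (the $5,000 floor applies)
    print(max_notification_penalty(5_000))   # 100000
    print(max_notification_penalty(20_000))  # 250000 (the cap applies)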

Conclusion

The SHIELD Act has far reaching effects, as any business that holds private information of a New York resident – regardless of whether that organization does business in New York – is required to comply. “The SHIELD Act will put strong safeguards in place to curb data breaches and identity theft,” said Justin Brookman, Director of Privacy and Technology Policy for Consumer Reports. The SHIELD Act signifies how seriously New York, like other states across the nation, is taking privacy and data security matters.  Organizations, regardless of their location, should be assessing and reviewing their data breach prevention and response activities, building robust data protection programs, and investing in written information security programs (WISPs).

The California Consumer Privacy Act (CCPA), which goes into effect January 1, 2020, is considered the most robust state privacy law in the United States. The CCPA seems to have spurred a flood of similar legislative proposals on the state level, and it was only a matter of time before the Empire State introduced its own version of the law. The New York Privacy Act (NYPA), s5642, introduced last month by New York Senator Kevin Thomas, the Chair of the Consumer Protection Committee, is considered a more expansive version of its California counterpart.

Similar to the CCPA, the NYPA would provide consumers with greater control over their personal data and impose substantial duties on businesses that control and process data; however, the NYPA is distinct from the CCPA in significant ways. Below are several key features of the NYPA:

  • Application: Unlike the CCPA, which applies only to businesses meeting certain thresholds (e.g., $25 million in annual revenue), the NYPA applies to “legal entities that conduct business in New York” or that produce products or services that “intentionally target” New York residents. This means that small-to-medium size businesses, and potentially even not-for-profit organizations, will be subject to the law’s privacy and security obligations. Exemptions include state and local governments, as well as personal data regulated by HIPAA, HITECH, or the GLBA and, notably, “data sets maintained for employment records purposes.”
  • Consumer Rights: The NYPA provides consumers a broad set of rights over their personal data, including the right to access, the right to rectification, the right to delete, the right to stop processing, and the right to data portability. This extends the rights afforded to consumers by the CCPA, which does not include a right to rectification.
  • Privacy and Security Obligations: Under the NYPA, covered businesses would be required to “exercise the duty of care, loyalty and confidentiality . . . with respect to securing the personal data of a consumer against a privacy risk; and shall act in the best interests of the consumer, without regard to the interests of the entity, . . . in a manner expected by a reasonable consumer under the circumstances.” In addition, businesses are required to “reasonably secure personal data from unauthorized access” and “promptly” notify consumers of a breach. Finally, the law prevents businesses from using personal data in a way that “(i) benefits an online service provider to the detriment of an end user; (ii) would result in reasonably foreseeable physical or financial harm to a consumer; or (iii) would be unexpected and “highly offensive” to a “reasonable consumer.”
  • Enforcement: The New York State Attorney General may bring an action in the name of the state, or on behalf of residents of the state; however, a private right of action is also available to any person injured by reason of a violation of the law. If passed, this enforcement provision would likely create an influx of litigation. A similar cause of action exists under an Illinois privacy law that you might have heard about, the Illinois Biometric Information Privacy Act or “BIPA.” That provision has resulted in a flood of litigation, including putative class actions, seeking to recover statutory damages for plaintiffs who allege their biometric information has been collected and/or disclosed in violation of the statute. This is arguably the most significant difference between the NYPA and the CCPA. Despite several attempts to expand the private right of action, in its current form the CCPA only allows for a private right of action in very limited circumstances: if nonencrypted or nonredacted personal information is subject to unauthorized access, exfiltration, theft, or disclosure because the covered business did not meet its duty to implement and maintain reasonable safeguards to protect that information.

The NYPA is still in the very early stages of the legislative process – it has only been reviewed by the Senate’s Consumer Protection Committee, and is still looking for a co-sponsor from the state Assembly. Nonetheless, such an aggressive bill signifies the seriousness in which New York is considering privacy and security matters.  Organizations, regardless of their location, should be assessing and reviewing their data collection activities, building robust data protection programs, and investing in written information security programs (WISPs).


The GDPR is wrapping up its first year and moving full steam ahead. This principles-based regulation has had a global impact on organizations as well as individuals. While there continue to be many questions about its application and scope, anticipated European Data Protection Board guidance and Data Protection Authority enforcement activity should provide further clarity in the upcoming year. In the meantime, here are a few frequently asked questions – some reminders of key principles under the GDPR and others addressing challenges for implementation and what lies ahead.

Can US organizations be subject to the jurisdiction of the GDPR?

Whether a US organization is subject to the GDPR is a fact-based determination. Jurisdiction may apply where the US organization has human or technical resources located in the EU and processes EU personal data in the context of activities performed by those resources. In cases where the US organization does not have human or technical resources located in the EU, it may be subject to the GDPR’s jurisdiction in two instances: if the organization targets individuals in the EU (not businesses) by offering goods or services to them, regardless of whether payment is required, or if it monitors the behavior of individuals in the EU and uses that personal data for purposes such as profiling (e.g. website cookies, wearable devices). The GDPR may also apply indirectly to a US organization through a data processing agreement.

If we execute a data processing agreement, does that make our US organization subject to the GDPR?

When an organization subject to the GDPR engages a third party to process its EU data, the GDPR requires that the organization impose contractual obligations on the third party to implement certain GDPR-based safeguards. If you are not otherwise subject to the GDPR, executing a data processing agreement will not directly subject you to the GDPR. Instead, it will contractually obligate you to follow a limited, specific set of GDPR-based provisions. Your GDPR-based obligations will be indirect in that they are contractual in nature.

Does the GDPR apply only to the data of EU citizens?

No, the GDPR applies to the processing of the personal data of data subjects who are in the EU regardless of their nationality or residence.

Is our organization subject to the GDPR if EU individuals access our website and make purchases?

If your organization does not have human or technical resources in the EU, the mere accessibility of your website to EU visitors, alone, will not subject you to the GDPR. However, if your website is designed to target EU individuals (e.g. through features such as translation to local language, currency converters, local contact information, references to EU purchasers, or other accommodations for EU individuals) your activities may be viewed as targeting individuals in the EU and subject you to the GDPR.

Are we required to delete an individual’s personal data if they request it?

If your organization is subject to the GDPR, an individual may request that you delete their personal data. However, this is not an absolute right. Your organization is not required to delete the individual’s personal data if it is necessary

  • for compliance with a legal obligation or the establishment, exercise, or defense of a legal claim;
  • for reasons of public interest (e.g., public health, scientific, statistical, or historical research purposes);
  • to exercise the right of freedom of expression or information;
  • where there is a legal obligation to keep the data; or
  • where you have anonymized the data.

Additional consideration should be given to any response when the individual’s data is also contained in your back-ups.

GDPR principles have started to influence law in the U.S. In fact, many have been watching developments regarding the California Consumer Privacy Act (CCPA), which shares a right to delete as it pertains to the personal information of a California resident. Similar to the GDPR, it is not an absolute right, and in certain cases an exception may apply. For instance, both laws contain an exception from the right to have personal information deleted when the information is needed to comply with certain laws.

Does the GDPR apply to an EU citizen who works in the US?

If your organization is not subject to the GDPR and you hire an EU citizen to work in the US, the GDPR may not apply to the processing of their personal data in the US. However, depending on the circumstances, the answer may be different if the EU citizen is in the US on temporary assignment from an EU parent. In that scenario, their data may be subject to the GDPR if the US entity’s relationship with the parent creates an establishment in the EU, and it processes this data in the context of the activities of that establishment. To the extent the EU parent transfers the EU employee’s personal data from the EU to the US entity, that transfer may require EU-US Privacy Shield certification, the execution of binding corporate rules, or standard contractual clauses. These measures are designed to ensure data is protected when it is transferred to a country, such as the US, that is not deemed to provide an adequate level of data protection.

Do we need to obtain an EU individual’s consent every time we collect their personal data?

If your organization is subject to the GDPR and processes an EU individual’s information, you must have a “legal basis” to do so. Consent is just one legal basis. In addition to consent, two of the most commonly used legal bases are the “legitimate interests” of your organization and the performance of a contract with the individual. A legitimate interest is a business or operational need that is not outweighed by the individual’s rights (e.g., processing personal data for website security, conducting background checks, or coordinating travel arrangements). Processing necessary to the performance of a contract is activity that enables you to perform a contract entered into with the individual (e.g., processing employee data for payroll pursuant to the employment contract, or processing consumer data for shipping goods under a purchase order).

Should we obtain an employee’s consent to process their personal data?
