The passage of Prop 24, the California Privacy Rights Act of 2020 (“CPRA”), has caused some confusion among businesses in California.  The confusion stems from the fact that the CPRA has an effective date of January 1, 2023, amending the existing California Consumer Privacy Act (CCPA) when it takes effect, while also immediately extending the CCPA’s current limited exemptions for employment-related data to that same date. (Without the CPRA, the limited exemptions would have already expired.)  It appears that this labyrinth of amendments, extensions, and exemptions has misled some businesses subject to the CCPA (the rules for which will change somewhat under the CPRA) into believing that they are completely exempt from privacy obligations until 2023 with respect to job applicants, employees, owners, directors, officers, medical staff, and contractors (collectively, “employees and applicants”).  This is not the case!  In short, businesses have existing obligations under the CCPA concerning the personal information of their employees and applicants, obligations that became effective on January 1, 2020.

To understand the current employment-related obligations of businesses in California, a brief history lesson is needed.  The CCPA was signed into law in 2018 by then-Governor Jerry Brown.  It immediately became clear that there were major problems with the law, including, but not limited to, the definition of “consumer” (the second C in CCPA), which covers any resident of California.  Lawmakers recognized the potential issues that would come from granting employment-related data subjects (i.e., job applicants, employees, independent contractors) all the rights a traditional consumer would have under the CCPA.  Thus, the California State Assembly introduced AB25, which originally sought to completely exempt businesses from having to comply with the CCPA with respect to employees and applicants.

Unfortunately for employers, AB25 was amended in the State Senate, and the version that was eventually passed and signed into law by Governor Gavin Newsom in October 2019 (just weeks before the CCPA became effective) exempted businesses in their role as employers from most, but not all, of the CCPA’s requirements with respect to employment-related data (i.e., the limited exemptions mentioned above).

Under the CCPA (as amended by AB25), employers have the following current obligations:

  • Provide notices to employment-related data subjects (job applicants, employees, owners, directors, officers, medical staff, and contractors) of the categories of personal information being collected and the purposes for which the personal information will be used
  • Implement “reasonable security” over certain categories of personal information to avoid a private right of action following a data breach. To this end, it may be prudent to review and augment vendor contracts to ensure that employment-related personal information is handled properly.

Companies should continue to monitor CCPA/CPRA developments, and ensure their privacy programs and procedures remain aligned with current compliance requirements.

On May 12, 2021, the Biden Administration issued an Executive Order on “Improving the Nation’s Cybersecurity” (EO). The EO was in the works prior to the Colonial Pipeline cyberattack, reportedly a ransomware incident that snarled the flow of gas on the East Coast for days. Ransomware attacks are nothing new, but they are increasing in severity. Most people do not see the large sums victim organizations pay to hackers to regain access to their encrypted data or to stop the disclosure of sensitive information. But most do see the crippling of vital infrastructure caused by compromised computer systems, without which basic services cease to flow.

Of course, the Colonial Pipeline incident is not the only attack we have seen affecting entities that provide critical infrastructure. In February of this year, ABC News reported that weak cybersecurity controls “allowed hackers to access a Florida wastewater treatment plant’s computer system and momentarily tamper with the water supply,” based on a memo by federal investigators obtained by ABC. A month later, sensitive data were exposed for some time in cloud storage by New England’s largest energy provider, according to reports. The SolarWinds breach last year, involving malware dubbed Sunburst, was a massive compromise of government agencies including the Department of Energy.

Will the EO help? It is unclear at this point; however, the EO makes a clear statement of the Administration’s policy:

It is the policy of my Administration that the prevention, detection, assessment, and remediation of cyber incidents is a top priority and essential to national and economic security.  The Federal Government must lead by example.  All Federal Information Systems should meet or exceed the standards and requirements for cybersecurity set forth in and issued pursuant to this order.

The EO will mostly affect the federal government and its agencies. However, several of its requirements will reach certain federal contractors, and it also will influence the private sector. Below are several of the items directed by the EO:

  • Removing contractual barriers in contracts between the federal government and its information technology (IT) and operational technology (OT) service providers. The goal here is to increase information sharing about threats, incidents, and risks in order to accelerate incident deterrence, prevention, and response efforts and to enable more effective defense of government systems and information. As part of this effort, the EO requires a review of the Federal Acquisition Regulation (FAR) concerning contracts with such providers and recommendations for language designed to achieve these goals. Recommendations will include, for example, the time periods within which contractors must report cyber incidents based on severity, with reporting on the most severe cyber incidents not to exceed 3 days after initial detection. The changes also will seek to standardize common cybersecurity contractual requirements across agencies.
  • Modernize the approach to cybersecurity. To achieve this goal, the steps called for in the EO include adopting security best practices, advancing toward Zero Trust Architecture, moving to secure cloud services, including Software as a Service (SaaS), and centralizing and streamlining access to cybersecurity data to drive analytics for identifying and managing cybersecurity risks. More specifically, the EO requires that within 180 days of its date, agencies adopt multi-factor authentication and encryption for data at rest and in transit, to the maximum extent consistent with Federal records laws and other applicable laws.
  • Improve software supply chain security. Driven by the impact of the SolarWinds incident, the EO points to the lack of transparency in software development and questions whether adequate controls exist to prevent tampering by malicious actors, among other things. The EO calls for guidance to be developed that will strengthen this supply chain, which will include standards, procedures, and criteria, such as securing development environments and attesting to conformity with secure software development practices. The EO also requires recommendations for contract language that would require suppliers of software available for purchase by agencies to comply with, and attest to complying with, the guidance developed. Efforts also will be made to reach the private sector. For instance, pilot programs will be initiated by the Secretary of Commerce, acting through the Director of NIST, to educate the public on the security capabilities of Internet-of-Things (IoT) devices and software development practices, and these programs will consider ways to incentivize manufacturers and developers to participate.
  • Establishing a Cyber Safety Review Board. The Board’s duties would include reviewing and assessing certain significant cyber incidents affecting Federal Civilian Executive Branch (FCEB) Information Systems or non-Federal systems, threat activity, vulnerabilities, mitigation activities, and agency responses.
  • Standardize incident response. Standardize the federal government’s response to cybersecurity vulnerabilities and incidents to ensure a more coordinated and centralized cataloging of incidents and tracking of agencies’ progress toward successful responses.
  • Improve detection. The EO seeks to improve detection of cybersecurity vulnerabilities and incidents on federal government networks.
  • Improving the federal government’s investigative and remediation capabilities. The Administration recognizes it is essential that agencies and their IT service providers collect and maintain network and system logs on federal information systems in order to address a cyber incident. The EO seeks recommendations on the types of logs to be maintained, the time periods to retain the logs and other relevant data, the time periods for agencies to enable recommended logging and security requirements, and how to protect logs. These recommendations will also be considered by the FAR Council when promulgating rules for removing barriers to sharing threat information.
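As a purely illustrative aside (not drawn from the EO itself), the multi-factor authentication the EO mandates is often implemented with time-based one-time passwords (TOTP) of the kind generated by common authenticator apps. The TOTP algorithm of RFCs 4226 and 6238 fits in a few lines of Python's standard library:

```python
import hmac
import hashlib
import struct
import time

def totp(secret, for_time=None, step=30, digits=6):
    """Compute an RFC 6238 time-based one-time password (SHA-1 variant).

    `secret` is the shared key as bytes; `for_time` is a Unix timestamp
    (defaults to now); `step` is the time window in seconds.
    """
    if for_time is None:
        for_time = int(time.time())
    counter = for_time // step                      # number of elapsed time steps
    msg = struct.pack(">Q", counter)                # 8-byte big-endian counter
    digest = hmac.new(secret, msg, hashlib.sha1).digest()
    offset = digest[-1] & 0x0F                      # dynamic truncation (RFC 4226)
    code = struct.unpack(">I", digest[offset:offset + 4])[0] & 0x7FFFFFFF
    return str(code % (10 ** digits)).zfill(digits)
```

With the RFC 6238 test secret `b"12345678901234567890"` and timestamp 59, this yields the published test value "287082".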

It is expected the U.S. government will ramp up efforts to strengthen its cybersecurity, and we can expect states to continue to legislate and regulate in this area. All businesses, including federal contractors, likely will experience pressure to evaluate their data privacy and security threats and vulnerabilities and adopt measures to address their risk and improve compliance.

As we noted in our last post, there has been a flurry of data privacy and security activity in New York, with the State appearing poised to join California as a leader in this space.  Most recently, on April 29, 2021, the New York City Council passed the Tenant Data Privacy Act (“TDPA”), which would impose on owners of “smart access” buildings obligations related to their collection, use, safeguarding, and retention of tenant data.

Under the TDPA, a “smart access” building is one that uses electronic or computerized technology (e.g., a key fob), radio frequency identification cards, mobile phone applications, biometric information (e.g., fingerprints, voiceprints, hand or face geometry), or other digital technology to grant entry to the building, or to common areas or individual dwelling units therein.  The TDPA would require owners of smart access buildings to develop and maintain policies and procedures to address the following requirements:

  1. Express Consent. Before collecting “reference data” from a tenant for use in connection with the building’s smart access system, the building owner would be required to obtain the tenant’s express consent “in writing or through a mobile application.”  “Reference data” is the data used by the system to verify that the individual seeking access is authorized to enter.  Even after obtaining consent, the owner would only be permitted to collect the minimum amount of data necessary to enable the smart access system to function effectively.
  2. Privacy Policy. Building owners would also need to provide a “plain language” privacy policy to their tenants that includes certain disclosures, including the data elements that the system collects, the third parties with which data is shared, how the data is protected, and how long it will be retained.
  3. Stringent Security Safeguards. Additionally, the TDPA would require building owners to implement robust security measures and safeguards to protect the data of their tenants, guests, and other users of the smart access system.  At a minimum, these security measures would need to include data encryption, a password reset capability (if the system uses a password), and regularly updated firmware to address security vulnerabilities.
  4. Data Destruction. With limited exceptions, building owners would need to destroy any “authentication data” collected through their smart access systems no later than 90 days after collection.  “Authentication data” is the data collected from the user at the point of authentication, excluding any data generated through or collected by a video or camera system used to monitor entrances, but not to grant entry.
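The 90-day destruction rule for authentication data lends itself to a simple automated retention sweep. The sketch below is hypothetical; the record layout and field names are our own assumptions, not drawn from the TDPA:

```python
from datetime import datetime, timedelta, timezone

# TDPA: authentication data must be destroyed no later than 90 days after collection
RETENTION = timedelta(days=90)

def purge_authentication_data(records, now=None):
    """Keep only records collected within the retention window.

    `records` is assumed to be a list of dicts, each carrying a
    timezone-aware 'collected_at' datetime; everything older is dropped
    (i.e., destroyed) by the sweep.
    """
    now = now or datetime.now(timezone.utc)
    return [r for r in records if now - r["collected_at"] <= RETENTION]
```

A scheduled job running a sweep like this (plus secure deletion of the underlying storage) is one way an owner might operationalize the requirement.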

The TDPA would impose strict limits on the categories of tenant data that building owners would be permitted to collect, generate, or utilize through their smart access systems.  Specifically, they would only be permitted to collect:

  • the user’s name;
  • the dwelling unit number and that of other doors or common areas to which the user has access;
  • the user’s preferred method of contact;
  • the user’s biometric identifier information (if the smart access system utilizes such information);
  • the identification card number or any identifier associated with the physical hardware used to facilitate building entry (e.g., Bluetooth);
  • passwords, passcodes, usernames and contact information used singly or in conjunction with other reference data to grant the user access;
  • lease information, including move-in and, if available, move-out dates; and
  • the time and method of access (but solely for security purposes).

Building owners would also be prohibited, subject to certain exceptions, from selling, leasing, or otherwise disclosing tenant data to any third parties.  Building owners that wish to engage third-party vendors to operate or facilitate use of their smart access systems would be required to first (a) provide to users the name of the vendor, the intended use of user data by the vendor, and a copy of the vendor’s privacy policy, and (b) obtain the users’ express written authorization to disclose the users’ data to the vendor.

Significantly, the TDPA would also create a private right of action for tenants whose data is unlawfully sold.  Such tenants would be empowered to seek either compensatory damages or statutory damages ranging from $200 to $1,000 per tenant, along with attorneys’ fees.

Unless vetoed by the City’s Mayor, the TDPA will take effect at the end of June 2021, though building owners will be granted a grace period until January 1, 2023, to develop their compliance programs and replace or upgrade their smart access systems.  Building owners should use that time wisely, as the TDPA’s requirements will, in many instances, be a heavy lift.

Effective July 9, 2021, certain retail and hospitality businesses that collect and use “biometric identifier information” from customers will need to post conspicuous notices near all customer entrances to their facilities.  These businesses will also be barred from selling, leasing, trading, sharing or otherwise profiting from the biometric identifier information they collect from customers.  Customers will have a private right of action to remedy violations, subject to a 30-day notice and cure period, with damages ranging from $500 to $5,000 per violation, along with attorneys’ fees.

These new requirements, which are set forth in an amendment to Title 22 of the NYC Admin. Code (the “Amendment”), apply to “commercial establishments,” a three-pronged category that includes:

  1. Food and drink establishments: Establishments that give or offer for sale to the public food or beverages for consumption or use on or off the premises, or on or off a pushcart, stand or vehicle.
  2. Places of entertainment: Privately or publicly owned and operated entertainment facilities, such as theaters, stadiums, arenas, racetracks, museums, amusement parks, observatories, or other places where attractions, performances, concerts, exhibits, athletic games or contests are held.
  3. Retail stores: Establishments wherein consumer commodities are sold, displayed or offered for sale, or where services are provided to consumers at retail.

The Amendment broadly defines “biometric identifier information” as a physiological or biological characteristic used to identify an individual including, but not limited to: (i) a retina or iris scan, (ii) a fingerprint or voiceprint, (iii) a scan of hand or face geometry, or any other identifying characteristic.

The Amendment will take effect amidst a flurry of data privacy and security activity in New York.

  • Last year, the New York Department of Financial Services (“DFS”) filed its first enforcement action under New York’s Cybersecurity Requirements for Financial Services Companies, 23 N.Y.C.R.R. Part 500 (“Reg 500”). DFS also announced a $1.5 million settlement with a residential mortgage services provider earlier this year.
  • In another recent development, the Stop Hacks and Improve Electronic Data Security Act (“SHIELD Act”), which took effect in March 2020, requires organizations that own or license private information related to New York residents to, among other things, develop, implement, and maintain reasonable safeguards to protect that information, which includes biometric information.
  • Building on the momentum from Reg 500 and the SHIELD Act, several additional privacy bills are currently under consideration:
  • One is the Biometric Privacy Act, which, if enacted, could make New York the next hotbed of class action litigation over biometric privacy.
  • Another is the Tenant Privacy Act, which, among other things, would require owners of “smart access” buildings – i.e., those that use key fobs, mobile apps, biometric identifiers, or other digital technologies to grant access to their buildings – to provide privacy policies to their tenants prior to collecting certain types of data from them, as well as to strictly limit (a) the categories and scope of data that the building owner collects from tenants, (b) how it uses that data (including a prohibition on data sales), and (c) how long it retains the data.
  • Additionally, New York is considering two bills – S567 and A680 – which would grant consumers sweeping privacy rights, comparable to those available under the CCPA in California and CDPA in Virginia.

Jackson Lewis’ Privacy, Data & Cybersecurity Group has been closely monitoring these fast-moving developments and is available to assist organizations with their compliance and risk mitigation efforts.

As access to COVID-19 vaccines becomes more prevalent, and we begin to conceptualize what a post-pandemic world might look like, many governments are assessing the idea of a COVID-19 vaccine passport framework.  In late March, the European Commission announced its plan for a COVID-19 Digital Green Certificate framework (“the framework”) to facilitate “safe free movement of citizens within the EU during the COVID-19 pandemic.” The Digital Green Certificate provides proof that an individual has either: 1) been vaccinated against COVID-19, 2) received a negative test result, or 3) recovered from COVID-19.  But while the benefits of such a plan are clear, there are significant privacy and security issues to consider.

Shortly after the European Commission released the proposal of the framework, the European Data Protection Board (EDPB) and the European Data Protection Supervisor (EDPS) issued a joint opinion on the framework with respect to its personal data protection implications (“the joint opinion”).  The joint opinion highlighted, above all, that such a framework must be consistent with, and not conflict with, application of the General Data Protection Regulation (“GDPR”), and that adequate technical and organizational privacy and security measures should be adopted in the context of the framework.

Below are key recommendations from the joint opinion:

  • Categories of Personal Data. While Annex I of the framework sets out categories and data fields of personal data that would be processed under the framework, the joint opinion emphasizes that the “justification for the need for such data fields” should also be included in the framework, and that “more detailed data fields (sub-categories of data)…under the already defined categories of data should be added”. These revisions will help ensure that the framework is consistent with several GDPR principles, including data minimization (i.e., not processing more data than necessary to fulfill the purpose for which the data was collected), purpose limitation (personal data shall only be collected for a specified, explicit and legitimate purpose), and impact assessment (the data protection impact assessment that the GDPR requires controllers to conduct before processing personal data would have to be redone if data fields were altered).
  • Adoption of Adequate Technical and Organizational Privacy and Security Measures in the Context of the Proposal. The joint opinion highlights that the framework should explicitly state that controllers and processors of personal data “shall take adequate technical and organizational measures to ensure a level of security appropriate to the risk of processing, in line with Article 32 GDPR”.  Also included, the joint opinion suggests “the establishment of processes for a regular testing, assessment and evaluation of the effectiveness of the privacy and security measures adopted”, as well as including language in the framework consistent with the GDPR to prevent confusion and ensure relevance.  Finally, the joint opinion notes that adoption of privacy and security measures should be taken both at the time of the determination of the means for processing, as well as by the time of the processing itself.
  • Identification of controllers and processors. The joint opinion recommends that the framework specify “the list of all entities foreseen to be acting as controllers, processors and recipients of the data in that Member State”. Identifying these entities will provide EU citizens with an understanding of “whom they may turn to for the exercise of their data protection rights under the GDPR, including in particular the right to receive transparent information on the ways in which data subject’s rights may be exercised with respect to the processing of personal data”.
  • Transparency and data subject’s rights. The personal data related to the framework is particularly sensitive.  As a result, the joint opinion urges the European Commission to “ensure that the transparency of the processes are clearly outlined for citizens to be able to exercise their data protection rights”.
  • Data storage. The joint opinion notes that, to ensure consistency with GDPR data storage principles (e.g., storing data no longer than is necessary for the purposes for which it was processed), the framework should, where possible, “explicitly define” the storage period for data processed under the framework and, if that is not possible, at least provide the “specific criteria used to determine such storage period”.
  • International data transfers. Finally, the joint opinion recommends “explicitly clarifying whether and when any international transfers of data are expected” as well as including safeguards “to ensure that third countries will only process the personal data exchanged for the purposes specified” within the framework.

The EU is not the only region implementing or considering a vaccine passport program.  Israel’s vaccine passport, the Green Pass, is already up and running (available to the 80% of the adult population that is fully vaccinated), and several private companies are trying to develop globalized vaccine passport programs.  For example, one large tech company’s vaccine passport technology is being tested by the State of New York for some sports venues and arenas.  Likewise, another technology, the Common Pass, if implemented, would help individuals demonstrate their COVID-19 status when traveling globally. It is worth noting, however, that some states are actively banning vaccine passport technology and requirements.  For example, just last week in Florida, Governor Ron DeSantis signed into law legislation prohibiting businesses, schools, and government offices from requiring proof of vaccination, with fines of up to $5,000. And in general, public support for vaccine passports in the U.S. seems to vary by activity. According to a recent Gallup poll, a majority of Americans support proof of vaccination for airplane travel and attending events with large crowds. Conversely, Americans are less supportive of requiring proof of vaccination at work, when staying in a hotel, or when dining at a restaurant.

Whatever the program, the privacy and security considerations surrounding the collection of personal data are similar, and they become increasingly complicated in the context of a global vaccine program where overlapping, and sometimes conflicting, data privacy and security laws and guidance come into play.  In the U.S. alone, there are numerous laws that may be implicated when vaccine-related data is collected from individuals in the public or private setting – such as from employees or customers.  These include the Americans with Disabilities Act (ADA), the Genetic Information Nondiscrimination Act (GINA), state laws, and the CCPA.  In addition to statutory or regulatory mandates, organizations will also need to consider existing contracts or services agreements which may provide for or limit the collection, sharing, storage, or return of data. Moreover, if a vendor were involved in a vaccine passport program, contracts and agreements would need to include confidentiality, data security, and similar provisions. This is most important if the vendor will be maintaining, storing, accessing, or utilizing the information collected about the organization’s employees or customers.

In short, a vaccine passport program may play a crucial role in ensuring a safe and healthy return to normalcy across the globe.  Nevertheless, the legal risks, challenges, and requirements of any such program, whether in a public or private forum, must be considered prior to implementation.

The SolarWinds hack highlights the critical need for organizations of all sizes to include cyber supply chain risk management as part of their information security program. It is also a reminder that privacy and security risks to an organization’s data can come from various vectors, including third party vendors and services providers. By way of example, the Pennsylvania Department of Health recently announced a data security incident involving a third-party vendor engaged to provide COVID-19 contact tracing. The personal information of Pennsylvania residents was potentially compromised when the vendor’s employees used an unauthorized collaboration channel.

Protecting against these risks requires maintaining and implementing a third-party vendor management policy, a critical and often overlooked part of an organization’s information security program.  Appropriate vendor management helps guard against threats to an organization’s data posed by authorized third parties who have direct or indirect access. Risks can include data breaches, unauthorized use or disclosure, and corruption or loss of data. These risks may come from vendors who provide cloud storage, SaaS, payroll processing or HR services, services using connected devices, IT services, or even records disposal.

Robust vendor management policies and practices typically involve three components:

  • conducting due diligence to ensure that the third-party vendor or service provider with whom the organization shares personal information, or to whom it discloses or provides access, implements reasonable and appropriate safeguards to ensure the privacy and security of that data;
  • contractually obligating the third-party vendor or service provider to implement such safeguards; and
  • monitoring the third-party vendor or service provider to ensure compliance with these contractual provisions.
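The three components above can be tracked as a simple per-vendor checklist. The sketch below is hypothetical; the field names are our own, not drawn from any statute or framework:

```python
from dataclasses import dataclass
from typing import List, Optional

@dataclass
class VendorRecord:
    """Tracks the three pillars of vendor management for one provider."""
    name: str
    due_diligence_done: bool = False              # security assessment reviewed
    safeguards_clause_in_contract: bool = False   # contractual obligation in place
    last_monitoring_review: Optional[str] = None  # date of most recent compliance check

    def gaps(self) -> List[str]:
        """Return the vendor management steps still outstanding for this vendor."""
        missing = []
        if not self.due_diligence_done:
            missing.append("due diligence")
        if not self.safeguards_clause_in_contract:
            missing.append("contract safeguards clause")
        if self.last_monitoring_review is None:
            missing.append("ongoing monitoring")
        return missing
```

A periodic report of `gaps()` across all vendors gives counsel and security teams a quick view of where the program falls short.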

While vendor management is a best practice, it is also required by certain U.S. federal laws, including the Gramm-Leach-Bliley Act and HIPAA; state laws in Massachusetts, Illinois, and California; and state regulations such as the New York Department of Financial Services Cybersecurity Requirements (23 NYCRR 500). In the EU, the General Data Protection Regulation (GDPR) specifically requires a data controller to use only processors (e.g., third-party service providers) who provide sufficient written guarantees to implement appropriate technical and organizational measures that ensure the privacy and security of the controller’s personal data.

Aside from mandated vendor management practices, over twenty states, including Florida, Texas, Massachusetts, New York, and Illinois, have laws requiring businesses that collect and maintain personal information to implement reasonable safeguards to protect that data. These states have been joined by the recently enacted California Privacy Rights Act (CPRA) and Virginia Consumer Data Protection Act (CDPA).  Although the majority of these statutes do not define reasonable safeguards, vendor management practices, like data retention and storage limitation practices, may constitute a “reasonable safeguard.”

The Federal Trade Commission (FTC) took such a position in a Consent Agreement resolving alleged violations of the Gramm-Leach-Bliley Act (GLBA) Safeguards Rule. In its complaint, the FTC alleged several violations including a failure to take reasonable steps to select service providers capable of maintaining appropriate safeguards for personal information provided by the company and a failure to require service providers by contract to implement appropriate safeguards for such personal information. The Consent Agreement required the company to establish, implement, and maintain a comprehensive data security program that protects the security of certain covered information (i.e., reasonable safeguards). This requirement specifically includes selecting and retaining vendors capable of safeguarding company personal information they access through or receive from the company, and contractually requiring vendors to implement and maintain safeguards for such information.

Over recent months, companies have faced heightened risks to their information security from threat actors, increased remote work arrangements, and outsourced activities involving sensitive data. These threats, combined with a proliferation of proposed and enacted data protection laws, underscore the importance of implementing, maintaining, and monitoring a robust vendor management program.

The California Privacy Rights Act (CPRA) amended the California Consumer Privacy Act (CCPA) and has an operative date of January 1, 2023. The CPRA introduces new compliance obligations, including a requirement that businesses conduct risk assessments. While many U.S. companies currently conduct risk assessments for compliance with state “reasonable safeguards” statutes (e.g., Florida, Texas, Illinois, Massachusetts, New York) or the HIPAA Security Rule, the CPRA risk assessment has a different focus. This risk assessment requirement is similar to the EU General Data Protection Regulation’s (GDPR) data protection impact assessment (DPIA).

The goal of conducting a CPRA risk assessment is to restrict or prohibit the processing of personal information where the risks to a consumer’s privacy outweigh any benefits to the consumer, business, stakeholders, and public. Notably, the CPRA does not limit risk assessments to activities involving the processing of sensitive data. In addition to conducting the actual risk assessment, this process will require a preliminary determination of which data processing activities may present a significant risk to privacy rights. The business must document these risk assessments for submission to the California Privacy Protection Agency on a regular basis.

Under the CPRA, the documented risk assessment shall:

  • include whether the processing involves consumers’ sensitive personal information (e.g., social security, driver’s license, state identification card, or passport number; account log-in, financial account, debit card, or credit card number in combination with security or access code, password, or credentials for account; precise geolocation; racial or ethnic origin, religious or philosophical beliefs, or union membership; contents of mail, email, and text messages unless the business is the intended recipient of the communication; genetic data; biometric information processed for the purpose of uniquely identifying a consumer; information related to health, sex life or orientation); and
  • identify and weigh the benefits to the business, consumer, other stakeholders, and the public from the processing against the potential risks to the rights of the consumer whose data is being processed.

The CPRA directs the California Attorney General and California Privacy Protection Agency to issue implementing regulations, including regulations related to risk assessments. These regulations must be adopted by July 1, 2022 and will likely provide further guidance on the scope of and process for conducting and documenting risk assessments.

Complying with the CPRA will require expanded data mapping and advance planning, some of which may occur prior to issuance of the implementing regulations. During this time, businesses may find the GDPR instructive, particularly since the CCPA and CPRA borrow liberally from the regulation.

Under the GDPR and related guidelines, a DPIA is required or recommended where data processing is likely to result in a high risk to the privacy rights of individuals. This includes activities that

  • use automated processing, including profiling, to evaluate an individual’s personal aspects and on which decisions are based that produce significant effects
  • include large scale processing of sensitive data
  • process data on a large scale
  • match or combine datasets
  • process data of vulnerable individuals (e.g., children)
  • innovate or use new technologies

The DPIA must document and include

  • a description of the processing operations
  • the purposes of the processing
  • the legitimate interest pursued by the business, where applicable
  • an assessment of the necessity and proportionality of the processing activity in relation to the purposes
  • an assessment of the risks to the individual’s privacy rights
  • measures designed to address the risks, including safeguards, security measures and mechanisms to ensure the protection of personal data

The CCPA and CPRA currently exclude employee personal information from certain provisions (e.g., the right to opt out, right to delete). This carve-out exempts employee personal information from the risk assessment requirement outlined above; however, the carve-out is due to expire on January 1, 2023. As businesses begin developing their risk assessment programs, they will want to monitor whether this exclusion for employee information will be extended and/or amended and how it might impact the risk assessment process.

As noted above, the operative date of the CPRA is January 1, 2023. Implementing regulations must be adopted by July 1, 2022 and civil and administrative enforcement activity can commence on July 1, 2023.

For additional information on the CPRA, please reach out to a member of our Privacy, Data and Cybersecurity practice group or check out our CPRA blog series.

In a recent post, we highlighted the need for a privacy and cybersecurity training program, one not solely focused on spotting phishing attempts (although that is quite important as well). A primary reason, quite simply, is that employees continue to be a leading cause of data breaches. This fact was reaffirmed for the Wyoming Department of Health (WDOH) when an employee mistake resulted in the disclosure of the personal information of nearly 165,000 Wyomingites. And the risk is only amplified in the current remote work environment.

The WDOH announced on April 27, 2021, that it had inadvertently exposed 53 files containing COVID-19 and Influenza test data and 1 file containing breath alcohol test results. Some of the files had been exposed as early as November 5, 2020, but WDOH did not discover the incident until March 10, 2021. According to WDOH, the files included name or patient ID, address, date of birth, test result(s), and date(s) of service, but did not contain social security numbers, banking, financial, or health insurance information.

The breach resulted from an “inadvertent exposure” of the files by a WDOH employee who mistakenly and impermissibly uploaded them to private and public GitHub.com repositories, resulting in disclosure to unauthorized individuals. Notably, WDOH intended GitHub.com, an internet-based software development platform, to be used by its employees only for software code storage and maintenance.

It is not clear why the WDOH employee uploaded 54 files containing patient test result data, including COVID-19 test results, to a service intended for storing software code. And we do not know whether the employee in this case received training on the purpose and use of GitHub.com. However, according to WDOH’s announcement, the files were promptly removed from GitHub.com, the employee was sanctioned, and WDOH retrained its workforce on data privacy and security best practices.

Certainly, mistakes processing personal information are going to happen and no amount of training will prevent all data incidents and breaches. There is no silver bullet. An important question for an organization to ask, however, is whether reasonable steps are being taken to minimize the risk to data, even with regard to inadvertent errors in handling and with regard to use of company systems, among other things.

Training can be one of a number of tools organizations use to create a culture of privacy and security. Increased awareness can help to minimize, even if not eliminate, inadvertent errors. The white paper we provided in our earlier post outlines several considerations for developing a robust program designed to continually remind employees of the vigilance needed to protect personal information from unauthorized access, acquisition, modification, and disclosure. It is and will continue to be an ongoing challenge, particularly in the current environment with workplaces shifting as we emerge from the harshest effects of the pandemic.

Will Florida be the next state to enact a comprehensive consumer privacy law? It sure is starting to look like a viable possibility.  With the California Consumer Privacy Act (“CCPA”) in full effect, and the recent enactment of Virginia’s Consumer Data Protection Act (“CDPA”), there has been a flurry of state privacy legislative proposals since the start of 2021, with Florida leading the way.  Backed by Governor Ron DeSantis, Florida House Bill 969 (HB 969) would create new obligations for covered businesses and greatly expand consumers’ rights concerning their personal information, such as a right to notice about a business’s data collection and selling practices.

Florida’s HB 969 was originally introduced in February (a full overview of the initial bill is available here) and has continued to move swiftly through the legislative process. On April 21, a slightly revised version of the bill passed the Florida House of Representatives by a 118–1 vote, expanding the scope of the private cause of action, changing the effective date, and modifying the scope of companies subject to the law.

Here are the key changes made to HB 969 since it was originally introduced:

First, and similar to the CCPA, HB 969 would establish a private cause of action for consumers affected by a data breach involving certain personal information when reasonable safeguards were not in place to protect that information. More expansive than the CCPA, however, HB 969 would also make a private cause of action available to consumers for a company’s failure to comply with deletion, opt-out, and correction requests.  Conversely, Virginia’s CDPA lacks a private cause of action entirely; the state’s attorney general has exclusive enforcement authority.

Second, if passed, HB 969 would go into effect on July 1, 2022, instead of the originally proposed January 1, 2022.  And finally, as initially introduced, HB 969 stated that the law would apply to for-profit businesses that conduct business in Florida, collect personal information about consumers, and satisfy at least one of the following threshold requirements:

  1. The business has global annual gross revenues over $25 million (adjusted to reflect any increase in the consumer price index); or
  2. The business annually buys, receives for the business’s commercial purposes, sells, or shares for commercial purposes the personal information of at least 50,000 consumers, households, or devices; or
  3. The business derives at least half of its global annual revenues from selling or sharing personal information about consumers.

Instead, HB 969 now stipulates that the law would apply only to for-profit businesses that satisfy at least two of the above threshold requirements.  In addition, the revised bill increased the annual gross revenues threshold from over $25 million to over $50 million.
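To see how the revised two-of-three test narrows the law’s reach, the applicability logic can be sketched in a few lines of code. This is purely illustrative; the function and parameter names are our own, not terms from the bill text, and the threshold values reflect the amended bill as described above.

```python
# Hypothetical sketch of HB 969's revised applicability test, which requires a
# for-profit Florida business to meet at least two of three thresholds.
# All names here are illustrative assumptions, not statutory language.

def hb969_applies(annual_gross_revenue: float,
                  consumers_bought_sold_or_shared: int,
                  share_of_revenue_from_selling_pi: float) -> bool:
    """Return True if at least two of the three thresholds are met."""
    thresholds_met = sum([
        annual_gross_revenue > 50_000_000,           # raised from $25M in the revision
        consumers_bought_sold_or_shared >= 50_000,   # consumers, households, or devices
        share_of_revenue_from_selling_pi >= 0.5,     # half of global annual revenue
    ])
    return thresholds_met >= 2

# A business with $60M in revenue that sells data on 80,000 consumers meets
# two thresholds, so the revised law would apply to it.
print(hb969_applies(60_000_000, 80_000, 0.1))  # True

# Under the original one-of-three test, a $30M business with 80,000 consumers'
# data would have been covered; under the revised test it is not.
print(hb969_applies(30_000_000, 80_000, 0.1))  # False
```

As the second example shows, both changes (the higher revenue floor and the two-of-three requirement) pull smaller businesses out of scope.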

Florida seems to be leading the way as the next state poised to enact a consumer privacy law, but it is not alone.  The International Association of Privacy Professionals (IAPP) has observed, “State-level momentum for comprehensive privacy bills is at an all-time high.” The IAPP maintains a map of state consumer privacy legislative activity, with in-depth analysis comparing key provisions. There are currently at least 14 states with consumer privacy bills undergoing the legislative process, and several other states where bills were introduced but died in committee or were postponed.

One key state to keep an eye on is Washington. For three consecutive years, the Washington state legislature has introduced versions of the Washington Privacy Act (WPA). In 2019, the bill failed in the House. In 2020, the House passed an amended version of the bill, but the two chambers failed to reach a compromise regarding enforcement provisions. Currently in committee, the WPA would impose GDPR-like requirements on businesses that collect personal information related to Washington residents. In addition to requirements for notice and consumer rights such as access, deletion, and rectification, the WPA would impose restrictions on the use of automated profiling and facial recognition.

States across the country are contemplating ways to enhance their data privacy and security protections. Organizations, regardless of their location, should be assessing and reviewing their data collection activities, building robust data protection programs, and investing in written information security programs.

Increased remote work due to the COVID-19 pandemic has only exacerbated privacy and cybersecurity concerns, and likely has not changed the finding in Experian’s 2015 Second Annual Data Breach Industry Forecast:

Employees and negligence are the leading cause of security incidents but remain the least reported issue.

A more recent state of the industry report by Shred-It, an information security company, found that 47 percent of business leaders said employee error, such as accidental loss of a device or document, had caused a data breach at their organization. Moreover, CybSafe, a behavioral security platform, analyzed data from the UK Information Commissioner’s Office (ICO) in 2020 and concluded that human error was the cause of approximately 90 percent of data breaches in the prior year. This was up from 61 percent and 87 percent in 2018 and 2019, respectively. The annual half-hour phishing training is important, but it may not be sufficient.

No business wants to send letters to individuals – employees or customers – informing them about a data breach. Businesses also do not want to have their proprietary and confidential business information, or that of their clients or customers, compromised. Unfortunately, no “silver bullet” exists to prevent important data from being accessed, used, disclosed or otherwise handled inappropriately – not even encryption. Companies must simply manage this risk through reasonable and appropriate safeguards. Because employees are a significant source of risk, steps must be taken to manage that risk, and one of those steps is training.

Check out our white paper on employee privacy and cybersecurity training programs.

It is a mistake to believe that only businesses in certain industries like healthcare, financial services, retail, education and other heavily regulated sectors have obligations to train employees about data security. Recent Department of Labor guidance concerning cybersecurity best practices for retirement plans includes a recommendation to: “Conduct periodic cybersecurity awareness training.” Indeed, a growing body of law coupled with the vast amounts of data most businesses maintain should prompt all businesses to assess their data privacy and security risks, and implement appropriate awareness and training programs.

Data privacy and security training can take many forms. Here are some questions to ask when setting up your own program, each briefly discussed in the white paper linked above:

  • Who should design and implement the program?
  • Who should be trained?
  • Who should conduct the training?
  • What should the training cover?
  • When and how often?
  • How should training be delivered?
  • Should training be documented?

No system is perfect, however, and even a good training program will not prevent data incidents from occurring. But the question the business will have to answer is not why it didn’t have a system in place to prevent all inappropriate uses or disclosures. Instead, the question will be whether the business had safeguards that were compliant and reasonable under the circumstances.