On December 16, 2022, the California Privacy Protection Agency (CPPA) held its final meeting before the California Privacy Rights Act (CPRA), which amends the California Consumer Privacy Act, takes effect on January 1, 2023. Despite the CPRA taking effect at the start of the year, the CPPA, the agency charged with implementing the law, has not finalized its rulemaking process. At the Friday meeting, the agency indicated that the final proposed rules are anticipated to be released at the end of January and, after going through the various administrative requirements, will take effect in April. In the meantime, regulations previously promulgated by the California Attorney General’s Office will remain in effect.

Though it has not finalized its CPRA rulemaking, the CPPA is setting its sights on other rulemaking duties, including the use of artificial intelligence in data collection and businesses’ cybersecurity assessments. The CPPA released sample questions covering these areas, which will be finalized and approved in the new year and then released for a comment period to collect insights on the framework needed for risk assessments and automated decision-making.

The sample questions on risk assessments address, among other things, the laws and other requirements businesses must already comply with when processing consumers’ personal information that require risk assessments, and how those assessments can be aligned with the requirements under the CPRA. Further, the CPPA is considering whether assessments prepared under other privacy statutes and regulations, such as the European General Data Protection Regulation and the Colorado Privacy Act, can be used for CPRA purposes.

Similarly, in considering rulemaking on automated decision-making, the CPPA is examining whether other laws already require access and/or opt-out rights in that context. The sample questions also seek information about how prevalent algorithmic discrimination based on classifications protected under California and federal law is, and whether it is more pronounced in some sectors.

Jackson Lewis will continue to track information related to privacy regulations and related issues. For additional information on the CPRA, please reach out to a member of our Privacy, Data, and Cybersecurity practice group.

On January 1, 2023, Virginia’s Consumer Data Protection Act (CDPA) takes effect. Key features of the CDPA include expansive consumer privacy rights (right to access, right of rectification, right to delete, right to opt out, right of portability, right against automated decision-making), a broad definition of “personal data,” the inclusion of a “sensitive data” category, and data protection assessment obligations for data controllers.

However, the CDPA is not the only privacy and data protection legislation in the Commonwealth. The following are some of the other laws to consider when working on privacy and data protection policies in the state.

Personal Information Privacy Act

This law, which predates the CDPA, restricts merchants’ sale of customers’ personal information as well as the use of social security numbers. For example, with regard to the limitations on the use of social security numbers, a person shall not:

1. Intentionally communicate another individual’s social security number to the general public;

2. Print an individual’s social security number on any card required for the individual to access or receive products or services provided by the person;

3. Require an individual to use his social security number to access an Internet website, unless a password, unique personal identification number, or other authentication device is also required to access the site; or

4. Send or cause to be sent or delivered any letter, envelope, or package that displays a social security number on the face of the mailing envelope or package, or from which a social security number is visible, whether on the outside or inside of the mailing envelope or package.

Insurance Data Security Act

Effective July 1, 2020, Virginia adopted legislation establishing data security requirements applicable to persons licensed by the insurance laws of the Commonwealth. Following several other state laws that have created data security regimes applicable to the insurance industry, the law requires licensees to maintain the security of information systems and nonpublic information. The law also requires licensees to investigate cybersecurity events and to notify individuals and the Commissioner of Insurance. More recently, implementing regulations were approved, effective June 1, 2021. Those regulations provide (i) rules for reporting cybersecurity events; (ii) risk assessment requirements that must be implemented by July 1, 2022; and (iii) additional security measures that must be implemented by July 1, 2022.

Data Breach Notification Law

Since July 2008, Virginia law has required entities doing business in Virginia and state agencies to notify individuals of a breach of their computerized, unredacted, and unencrypted personal information. Under the law, notice is required only if the breach causes, or it is reasonably believed that it has or will cause, identity theft or other fraud to a resident of the Commonwealth.

Similar to the data breach notification laws in other states, such as Massachusetts and New Hampshire, the notification must be provided to the Virginia Attorney General, as well as the affected residents. Also, if more than 1,000 persons would have to be notified at one time, the business would have to notify the Virginia Attorney General and all consumer reporting agencies of the timing, distribution, and content of the notice. Violations of this statute are enforced by the Attorney General, who may seek up to $150,000 in penalties per breach. Individuals also may recover direct economic damages from a violation.

If you have questions about developing a privacy and data compliance plan for Virginia law or related issues, please reach out to a member of our Privacy, Data, and Cybersecurity practice group.

In June 2022, the California Privacy Protection Agency (CPPA) Board first started discussions about revising the regulations previously released by the California Attorney General.

In October, the Board released proposed modifications to the regulations in advance of a planned Board meeting. Since then, the Board has rescheduled both Board and public meetings.

The Board appears to be nearing a final vote on the regulations but recently published further modifications to the regulations. These modifications start a new public comment period that ends November 21.

Recent updates to the regulations include:

  • Sections clarifying how consumers can opt-out of having their data sold or shared, including via opt-out preference signals.
  • Provisions providing allowances for enforcement flexibility, which are intended to assuage businesses’ concerns that the current delay in adopting final regulations will present compliance challenges.
  • Allowances for businesses, service providers, and contractors to delay compliance with requests to correct archived or backup systems until the data is restored to an active system or is next accessed or used.

Businesses can submit comments regarding the current version of the regulations by:

  • E-mail to: regulations@cppa.ca.gov. Submissions should include “CPPA Public Comment” in the subject line and provide comments within an attachment.
  • Mail to: California Privacy Protection Agency
    Attn: Brian Soublet
    2101 Arena Blvd., Sacramento, CA 95834

Jackson Lewis will continue to track information related to privacy regulations and related issues. For additional information on the CPRA, please reach out to a member of our Privacy, Data, and Cybersecurity practice group.

Responding in part to the nature of the post-COVID-19 remote workplace, NLRB GC Jennifer Abruzzo has released a memo on employers’ use of electronic monitoring and automated management in the workplace. The memo also directs NLRB Regions to submit to the Division of Advice any cases involving intrusive or abusive electronic surveillance and algorithmic management that interferes with the exercise of NLRA Section 7 rights.

Read the full article on Jackson Lewis’ Labor & Collective Bargaining.

We have been quite busy this October, which happens to be National Cybersecurity Awareness Month. But we did not want to let the month go by without some recognition, and we are grateful to the HHS Office for Civil Rights (OCR) for this always timely reminder for HIPAA covered entities and business associates – have a written incident response plan.

Why do we need another policy?

First, because it is required under the HIPAA Security Rule. See 45 CFR 164.308(a)(6). Also, because cybersecurity risks continue to rise. The OCR notes that cybersecurity incidents and data breaches continue to increase in the healthcare sector, citing a 69% increase in cyber-attacks for the first half of 2022 compared to 2021. Breaches of unsecured protected health information (PHI), including electronic PHI, reported to OCR affecting 500 or more individuals increased from 663 in 2020 to 714 in 2021.

Fine, so what does an incident response plan need to include?

The OCR describes some basic elements that should be included in an incident response plan (IRP):

  • identifying security incidents;
  • responding to security incidents;
  • mitigating harmful effects of security incidents; and
  • documenting security incidents and their outcomes.

As we get more specific below, note that each covered entity and business associate is different in several respects, such as size, number of locations, information systems, prior experience, cyber insurance policies, type of PHI, and state laws, just to name a few. So, your specific IRP may vary in significant ways, but these are four critical elements to address for your particular business and practice.

Can you be more specific?

Sure. The organization will want to think about who will be doing the responding – who is on the “security incident response team.” This is a team that is organized and trained to effectively respond to security incidents. OCR offers several areas to consider when forming a team, such as:

  • Have a strong balance of skill sets among team members (IT, legal, communications, etc.)
  • Ensure lines of communication will be available among team members during a crisis
  • Consider external parties that can provide specific expertise concerning incident response
  • Commit to regularly practicing incident response procedures for different types of attacks.

With a team established, the plan should provide for identifying security incidents. Of course, this requires knowing that a security incident is “the attempted or successful unauthorized access, use, disclosure, modification, or destruction of information or interference with system operations in an information system.” One way to identify security incidents is having audit logs in place and regularly reviewing them.
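As a simple illustration of the kind of automated log review that can help surface potential incidents, the sketch below scans audit log entries for repeated failed logins and bulk record exports. The log format, event names, and thresholds here are invented for this example; real audit logging and alerting systems vary widely and should be tuned to your environment.

```python
from collections import Counter

# Hypothetical audit log entries; real formats vary by system.
AUDIT_LOG = [
    "2022-10-01T09:14:02 LOGIN_FAILURE user=jdoe src=10.0.0.5",
    "2022-10-01T09:14:09 LOGIN_FAILURE user=jdoe src=10.0.0.5",
    "2022-10-01T09:14:15 LOGIN_FAILURE user=jdoe src=10.0.0.5",
    "2022-10-01T09:20:44 LOGIN_SUCCESS user=asmith src=10.0.0.9",
    "2022-10-01T09:21:03 PHI_EXPORT user=jdoe records=5000",
]

def flag_suspicious(entries, failure_threshold=3, export_threshold=1000):
    """Flag sources with repeated failed logins and any bulk PHI exports."""
    failures = Counter()
    alerts = []
    for line in entries:
        parts = line.split()
        event = parts[1]
        fields = dict(f.split("=", 1) for f in parts[2:])
        if event == "LOGIN_FAILURE":
            failures[fields["src"]] += 1
        elif event == "PHI_EXPORT" and int(fields["records"]) >= export_threshold:
            alerts.append(f"bulk export by {fields['user']}")
    for src, count in failures.items():
        if count >= failure_threshold:
            alerts.append(f"{count} failed logins from {src}")
    return alerts

print(flag_suspicious(AUDIT_LOG))
# ['bulk export by jdoe', '3 failed logins from 10.0.0.5']
```

A review routine like this is no substitute for a full monitoring program, but it captures the core idea: collect logs centrally, define what "suspicious" means for your systems, and check regularly rather than only after an incident.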

In the event of a security incident, the plan needs to cover the steps for responding. This includes containing the security incident and any threat it may pose to ePHI, such as by identifying and removing any malicious code and mitigating any vulnerabilities that may have permitted the security incident to occur. However, to be better prepared to respond to security incidents, the plan should also include procedures such as:

  • Processes to identify and determine the scope of security incidents
  • Instructions for managing the security incident
  • Creating and maintaining a list of assets (computer systems and data) to prioritize when responding to a security incident
  • Conducting a forensic analysis to identify the extent and magnitude of the security incident
  • Reporting the security incident to appropriate internal and external entities
  • Processes for collecting and maintaining evidence of the security incident (e.g., log files, registry keys, and other artifacts) to determine what was accessed during the security incident

After the security incident has been neutralized, the next steps should include mitigation, including recovery and restoration of systems and data to return to normal operations. Mitigation efforts are facilitated through contingency planning and robust data backup and recovery processes. These are areas that should not be considered for the first time when a security incident occurs. For example, knowing that you have a backup is not enough; regularly verifying that you can restore from backups while maintaining integrity is key.

When these steps have been completed, particularly after operations have returned to normal, regulated entities must document their response to the security incident. This is required under HIPAA. The IRP can be helpful in outlining what information to include in the documentation (e.g., discovery of the security incident; systems and data affected; response and mitigation activities; recovery outcomes; root cause analysis; forensic data collected).

What about notification, shouldn’t that be part of the IRP?

Of course. The IRP should address the entity’s reporting obligations, whether to the affected individuals, the OCR, the media, state agencies, or a covered entity (for business associates). A critical aspect of notification is timing. For breaches affecting 500 or more individuals, notice is required without unreasonable delay and no later than 60 calendar days from the discovery of the breach. The OCR reminds regulated entities:

the time period [for reporting] begins when the incident is first known, not when the investigation of the incident is complete, even if it is initially unclear whether the incident constitutes a breach as defined in the rule. 

Further, 60 days is the outer limit for notification but,

in some cases, it may be an ‘unreasonable delay’ to wait until the 60th day to provide notification.
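To make the timing concrete, the outer-limit deadline can be computed from the discovery date. This is a simple illustration only; as the OCR notes, the "without unreasonable delay" standard may require notice well before the 60th day.

```python
from datetime import date, timedelta

def notification_deadline(discovery_date: date, outer_limit_days: int = 60) -> date:
    """Outer limit for breach notification: 60 calendar days from discovery.
    Notice may be required well before this date to avoid 'unreasonable
    delay' under the rule; this computes only the latest possible date."""
    return discovery_date + timedelta(days=outer_limit_days)

# A breach discovered October 1, 2022 must be reported no later than:
print(notification_deadline(date(2022, 10, 1)))  # 2022-11-30
```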

There is a lot more that can be said about IRPs, and it is not a good idea to wait until the next National Cybersecurity Awareness Month to craft one. Also, while directed to healthcare providers and their business associates, the same kind of planning is prudent for just about all organizations. 

Over the past several years, there has been a significant increase in the use of dashcam technology. The technology available in the market is quite advanced. As we observed here, these devices can be equipped with geolocation, AI, facial recognition, and other technologies. Though designed primarily to enhance driver safety and fleet management, the devices raise privacy concerns that are tapping the brakes on implementation in California.

On September 29, 2022, Governor Gavin Newsom signed AB-984 into law, effective January 1, 2023. The law builds on other privacy protections in California, such as the California Consumer Privacy Act and Penal Code Sec. 637.7. Section 637.7 prohibits using an electronic tracking device to determine the location or movement of a person; however, it does not apply when the vehicle owner (e.g., the employer) has consented to the use of the device.

Among other exciting provisions, including the latest in vehicle tech – digital license plates, AB-984 places significant restrictions on the use of an alternative device to monitor employees. Specifically, the law provides:

An employer, or a person acting on behalf of the employer, shall not use an alternative device to monitor employees except during work hours, and only if strictly necessary for the performance of the employee’s duties.

The statute defines monitoring to include, without limitation, “locating, tracking, watching, listening to, or otherwise surveilling the employee.” However, there is no definition of “strictly necessary,” making the statute more difficult to navigate.

Employers that choose to install such a device must provide notice to employees prior to monitoring with the device. That notice must, at a minimum, include the following:

(A) A description of the specific activities that will be monitored.

(B) A description of the worker data that will be collected as a part of the monitoring.

(C) A notification of whether the data gathered through monitoring will be used to make or inform any employment-related decisions, including, but not limited to, disciplinary and termination decisions, and, if so, how, including any associated benchmarks.

(D) A description of the vendors or other third parties, if any, to which information collected through monitoring will be disclosed or transferred. The description shall include the name of the vendor or third party and the purpose for the data transfer.

(E) A description of the organizational positions that are authorized to access the data gathered through the alternative device.

(F) A description of the dates, times, and frequency that the monitoring will occur.

(G) A description of where the data will be stored and the length of time it will be retained.

(H) A notification of the employee’s right to disable monitoring, including vehicle location technology, outside of work hours.

Employers that fail to comply can be subject to significant penalties. A civil penalty of $250 can be imposed for an initial violation, while a $1,000 penalty per employee can be imposed for each subsequent violation. The statute expressly provides that penalties “shall be assessed per employee, per violation, and per day that monitoring without proper notice is conducted.”
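To illustrate how quickly exposure can accumulate under a per-employee, per-day structure, the sketch below applies one possible reading of the penalty provisions (treating the first day of noncompliant monitoring as the initial violation for each employee and each later day as a subsequent violation). This is a simplified illustration for planning purposes only, not a legal calculation, and the statute may be applied differently.

```python
def ab984_penalty_exposure(employees: int, days_noncompliant: int) -> int:
    """Rough illustration of AB-984's penalty structure: $250 for an initial
    violation, $1,000 per employee for each subsequent violation, assessed
    per employee, per violation, per day.  One assumed reading of the
    statute, for illustration only -- not a legal calculation."""
    INITIAL, SUBSEQUENT = 250, 1000
    if days_noncompliant < 1 or employees < 1:
        return 0
    initial_total = INITIAL * employees                       # day 1
    subsequent_total = SUBSEQUENT * employees * (days_noncompliant - 1)
    return initial_total + subsequent_total

# 20 employees monitored without proper notice for 5 days:
print(ab984_penalty_exposure(20, 5))  # 85000
```

Even under this conservative reading, a few days of noncompliant monitoring across a modest fleet can produce five- or six-figure exposure, which is why the notice requirements warrant attention before any device is deployed.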

In addition to penalties, employers have additional exposure if found to have retaliated against an employee for removing or disabling an alternative device’s monitoring capabilities outside of work hours. In that case, the employee “shall be entitled to all available penalties, remedies, and compensation, including, but not limited to, reinstatement and reimbursement of lost wages, work benefits, or other compensation caused by the retaliation.”

For employers considering using an alternative device to monitor employees in vehicles, there are at least two steps to take:

  • Assess whether doing so is “strictly necessary” for the performance of the employee’s duties
  • Provide advance notice of the monitoring

There are several other issues to consider as well, as is apparent just from the items required to be included in the notice.

On October 21 and 22, the California Privacy Protection Agency (CPPA) Board will meet to discuss possible action regarding the proposed regulations for the California Consumer Privacy Act (CCPA) and California Privacy Rights Act (CPRA).

Previously, in June 2022, the Board met to discuss revising the regulations previously released by the California Attorney General.  

In advance of the October CPPA Board meeting, further proposed modifications to the regulations have been published, along with an explanation of the proposed changes.

Some of the more significant changes include:

  • Revised Section 7002 regarding the “Restrictions on the Collection and Use of Personal Information” to clarify specific requirements. The revision sets forth factors to be considered in evaluating collection and use, including: (1) the reasonable expectations of a consumer concerning the purpose for which personal information is collected or processed, (2) the purposes that are compatible with the context in which the personal information is collected, and (3) whether collecting or processing personal information is reasonably necessary and proportionate to achieve those purposes.
  • Revised Section 7004 regarding the “Requirements for Methods for Submitting CCPA Requests and Obtaining Consumer Consent” to explain how different user interfaces can impair or interfere with consumers’ choice and can fail to meet the definition of consent under the Civil Code.
  • Also revised Section 7004 (a)(2) to clarify that the symmetry in choice principle also considers whether different paths are more difficult or time-consuming.
  • Revised Section 7052 regarding “Third Parties” to clarify that third parties are contractually required to treat the personal information that businesses make available to them in the same manner as the business is required to treat it under the CCPA.

It is possible that at the October meeting, the CPPA could elect to adopt the modified regulations or choose to make further changes.

Jackson Lewis will continue to track information related to privacy regulations and related issues. For additional information on the CPRA, please reach out to a member of our Privacy, Data, and Cybersecurity practice group.

In July 2020, the Court of Justice of the European Union (CJEU) declared the EU-U.S. Privacy Shield invalid. The EU-U.S. Privacy Shield program was designed to provide European Economic Area (EEA) data transferred to the U.S. with a level of protection comparable to EU law. The CJEU invalidated the program, stating that U.S. companies could not provide an essentially equivalent level of protection given the breadth of U.S. national security surveillance laws: FISA Section 702, E.O. 12333, and PPD-28. In the wake of the decision, businesses relying on the EU-U.S. Privacy Shield as an adequate transfer mechanism to perform routine activities (such as sending employee data from the EEA to U.S. headquarters for HR administration, accessing a global HR database from the U.S., remotely accessing EEA user accounts from the U.S. for IT services, providing EEA data to third-party vendors for processing in the U.S., or relying on certain cloud-based services) were forced to rely on alternate mechanisms, including standard contractual clauses.

On October 7, 2022, President Biden signed an Executive Order that outlines steps the U.S. government will take to implement a new EU-U.S. data privacy framework, the Trans-Atlantic Data Privacy Framework, to replace the invalidated EU-U.S. Privacy Shield.

The new Framework is designed to restore a legal basis for transatlantic data flows and addresses concerns raised in the CJEU decision by strengthening privacy and civil liberties protections for foreign individuals and creating an independent and binding process for non-U.S. citizens to seek redress if they believe their personal data was improperly collected through U.S. signals intelligence. Signals intelligence activities involve collecting foreign intelligence from communications and information systems.

The Executive Order is the first step toward rebuilding the EU-U.S. data protection program. Over the next few months, the EU Commission will review the framework and if satisfied with the proposed safeguards and protections for EU data and individuals, issue an “adequacy decision” that concludes data transferred to the U.S. will receive an essentially equivalent level of protection. While legal challenges to this new framework are anticipated, the Executive Order demonstrates a U.S. commitment to addressing EU concerns regarding data protection. It also provides an incentive to U.S. organizations to maintain their EU-US Privacy Shield certification in hopes it can be leveraged under the new framework.

If you have questions about the effect of the Executive Order on your business or related issues, contact the Jackson Lewis attorney with whom you regularly work or a member of our Privacy, Data, and Cybersecurity practice group.

On October 3, 2022, the White House Office of Science and Technology Policy published its “Blueprint for an AI Bill of Rights.” This adds to prior federal guidance released by the EEOC and DOJ regarding the use of AI in employment decisions.

The framework published by the White House is intended to apply to automated systems that have an impact on individuals’ “rights, opportunities, or access to critical resources or services.”

The blueprint sets forth five protections to which individuals should be entitled:

  • Safe and effective systems
  • Protection from algorithmic discrimination
  • Data Privacy
  • Notice and explanation when an automated system is being used and how it impacts the individual
  • Ability to opt out of automated systems and have access to people who can remedy issues

The framework is intended to assist in putting guardrails in place in the use of AI and automated systems. In conjunction with the publishing of the blueprint, the Biden-Harris Administration announced actions across the federal government to advance protections for workers and employers, students, patients, and more.

These initiatives include the Department of Labor’s release of “What the Blueprint for an AI Bill of Rights Means for Workers” and its ramped-up enforcement of required surveillance reporting to protect worker organizing. There are also consumer protections noted, such as the Federal Trade Commission’s recent consideration of rulemaking on consumer privacy and data, and many others related to education and health care.

The Administration’s announcement is consistent with steps taken during the Trump Administration. It also is generally consistent with principles for AI established by the Organization for Economic Cooperation and Development (OECD). The OECD is a global organization established in 1961 to promote economic cooperation and development with nearly 40 members, including the United States. In 2019, the U.S. National Telecommunications and Information Administration joined the OECD in adopting global AI principles. Among other things, the OECD’s Principles on Artificial Intelligence provide that AI actors should:

“respect the rule of law, human rights, and democratic values, throughout the AI system lifecycle. These include freedom, dignity, and autonomy, privacy and data protection, nondiscrimination and equality, diversity, fairness, social justice, and internationally recognized labour rights.”

“provide meaningful information… (i) to foster a general understanding of AI systems, (ii) to make stakeholders aware of their interactions with AI systems, including in the workplace, (iii) to enable those affected…to understand the outcome, and (iv) to enable those adversely affected… to challenge its outcome”

While this latest blueprint for use of AI is only guidance at this time, it signals the direction the federal government intends to take with future regulation and legislation when it comes to automated systems and related technology. And, it builds on a set of principles emerging globally that seek to ensure the appropriate use of AI, principles that we are seeing embedded in laws in the U.S. such as the law regulating “automated employment decision tools” going into effect in New York City in 2023.

Businesses and employers who use AI and automated systems need to consider the Administration’s guidance along with emerging laws, regulations, and principles to guide their adoption and application of AI. This includes developing policies and procedures that establish protections to avoid potential discrimination or breaches of privacy.

If you have questions about developing policies and procedures around the use of AI and automated systems, contact the Jackson Lewis attorney with whom you regularly work or a member of our Privacy, Data, and Cybersecurity practice group.

California passed Assembly Bill (AB) 2089, which amends the Confidentiality of Medical Information Act (CMIA) to include mental health application information under the definition of medical information. Under the revisions to CMIA, mental health application information is defined as information related to a consumer’s inferred or diagnosed mental health or substance use disorder, as defined in Section 1374.72 of the Health and Safety Code, collected by a mental health digital service.  Similarly, “mental health digital service” is defined as a mobile-based application or internet website that collects mental health application information from a consumer, markets itself as facilitating mental health services to a consumer, and uses the information to facilitate mental health services to a consumer.

Under AB 2089 any business that offers a mental health digital service to a consumer for the purpose of allowing the individual to manage the individual’s information, or for the diagnosis, treatment, or management of a medical condition of the individual, is deemed to be a provider of health care subject to the requirements of the CMIA.

Moreover, the bill requires any business that offers a mental health digital service, when partnering with a provider of health care, to provide to the health care provider information regarding how to find data breaches reported, as specified, on the internet website of the Attorney General.

If you have questions about AB 2089 or related issues, contact the Jackson Lewis attorney with whom you regularly work or reach out to a member of our Privacy, Data, and Cybersecurity practice group.