On March 6, 2024, New Hampshire’s Governor signed Senate Bill 255, which establishes a consumer data privacy law for the state. The Granite State joins the growing list of states with consumer data privacy laws, and is the second state in 2024 to pass such a law, following New Jersey. The law takes effect January 1, 2025.

To whom does the law apply?

The law applies to persons who conduct business in the state or who produce products or services targeted to residents of the state and that, during a one-year period:

  • Controlled or processed the personal data of not less than 35,000 unique consumers, excluding personal data controlled or processed solely for the purpose of completing a payment transaction; or,
  • Controlled or processed the personal data of not less than 10,000 unique consumers and derived more than 25 percent of their gross revenue from the sale of personal data.

The law excludes certain entities such as non-profit organizations, entities subject to the Gramm-Leach-Bliley Act, and covered entities and business associates under HIPAA.

Who is protected by the law?

The law protects consumers, defined as residents of New Hampshire. The definition does not include an individual acting in a commercial or employment context.

What data is protected by the law?

The law protects personal data, defined as any information linked or reasonably linkable to an identified or identifiable individual. Personal data does not include de-identified data or publicly available information. Other exempt categories of data include, without limitation, personal data collected under the Family Educational Rights and Privacy Act (FERPA), protected health information under HIPAA, and several other categories of health information.

What are the rights of consumers?

Consumers have the right under the law to:

  • Confirm whether or not a controller is processing the consumer’s personal data and access such personal data
  • Correct inaccuracies in the consumer’s personal data
  • Delete personal data provided by, or obtained about, the consumer
  • Obtain a copy of the consumer’s personal data processed by the controller
  • Opt out of the processing of the personal data for purposes of targeted advertising, the sale of personal data, or profiling in furtherance of solely automated decisions that produce legal or similarly significant effects. Although subject to some exceptions, a “sale” of personal data under the New Hampshire law includes the exchange of personal data for monetary or other valuable consideration by the controller to a third party, language similar to the California Consumer Privacy Act (CCPA).

When consumers seek to exercise these rights, controllers must respond without undue delay, and no later than 45 days after receipt of the request. The controller may extend the response period by 45 additional days when reasonably necessary. A controller also must establish a process for a consumer to appeal the controller’s refusal to take action on a request within a reasonable period after the decision. As with the CCPA, controllers generally may authenticate a request to exercise these rights and are not required to comply with a request they cannot authenticate, provided they notify the requesting party.

What obligations do controllers have?

Controllers have several obligations under the New Hampshire law. A significant obligation is the requirement to provide a “reasonably accessible, clear and meaningful privacy notice” that meets standards established by the secretary of state and that includes the following content:

  • The categories of personal data processed by the controller;
  • The purpose for processing personal data;
  • How consumers may exercise their consumer rights, including how a consumer may appeal a controller’s decision with regard to the consumer’s request;
  • The categories of personal data that the controller shares with third parties, if any;
  • The categories of third parties, if any, with which the controller shares personal data; and
  • An active electronic mail address or other online mechanism that the consumer may use to contact the controller.

This means that the controller needs to do some due diligence in advance of preparing the notice to understand the nature of the personal information it collects, processes, and maintains.
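By way of illustration only, that due diligence might start with a simple data inventory that maps to the notice elements listed above. The TypeScript sketch below is hypothetical (the interface and field names are ours, not the statute’s); the point is that the notice disclosures can be derived from a documented picture of actual data practices rather than drafted from memory:

```typescript
// Illustrative only: a minimal data-inventory structure a controller might
// compile while preparing its privacy notice. All names are hypothetical.
interface DataInventoryEntry {
  category: string;               // category of personal data processed
  purpose: string;                // purpose for processing
  sharedWithThirdParties: boolean;
  thirdPartyCategories: string[]; // categories of recipients, if any
}

const inventory: DataInventoryEntry[] = [
  {
    category: "contact information",
    purpose: "order fulfillment and customer support",
    sharedWithThirdParties: true,
    thirdPartyCategories: ["shipping carriers", "payment processors"],
  },
  {
    category: "browsing activity",
    purpose: "website analytics",
    sharedWithThirdParties: true,
    thirdPartyCategories: ["analytics providers"],
  },
];

// The "categories of third parties" disclosure in the notice can then be
// generated directly from the inventory.
const thirdPartyCategories = [
  ...new Set(inventory.flatMap((e) => e.thirdPartyCategories)),
];
console.log(thirdPartyCategories);
```

However an organization structures this exercise, keeping the inventory current makes it far easier to keep the privacy notice accurate as data practices change.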

Controllers also must:

  • Limit the collection of personal data to what is adequate, relevant, and reasonably necessary in relation to the purposes for which such data is processed, as disclosed to the consumer. As with other state data privacy laws, this means that controllers must give some thought to what they are collecting and whether they need to collect it;
  • Not process personal data for purposes that are neither reasonably necessary to, nor compatible with, the disclosed purposes for which such personal data is processed, as disclosed to the consumer unless the controller obtains the consumer’s consent;
  • Establish, implement, and maintain reasonable administrative, technical, and physical data security practices to protect the confidentiality, integrity, and accessibility of personal data, appropriate to the volume and nature of the personal data at issue. Notably, this security requirement, which exists in several other privacy laws, applies to all personal data, not just more sensitive categories such as Social Security numbers, financial account numbers, and health information;
  • Not process sensitive data concerning a consumer without obtaining the consumer’s consent, or, in the case of the processing of sensitive data concerning a known child, without processing such data in accordance with COPPA. Sensitive data means personal data that includes data revealing racial or ethnic origin, religious beliefs, mental or physical health condition or diagnosis, sex life, sexual orientation, or citizenship or immigration status; the processing of genetic or biometric data for the purpose of uniquely identifying an individual; personal data collected from a known child; or, precise geolocation data;
  • Not process personal data in violation of state and federal laws that prohibit unlawful discrimination against consumers;
  • Provide an effective mechanism for a consumer to revoke the consumer’s consent that is at least as easy as the mechanism by which the consumer provided the consent and, upon revocation, cease to process the data as soon as practicable, but not later than fifteen days after receipt of the request;
  • Not process the personal data of a consumer for purposes of targeted advertising, or sell the consumer’s personal data without the consumer’s consent, under circumstances where the controller has actual knowledge, and willfully disregards, that the consumer is at least thirteen years of age but younger than sixteen years of age; and
  • Not discriminate against a consumer for exercising any of the consumer rights contained in the New Hampshire law, including by denying goods or services, charging different prices or rates for goods or services, or providing a different level of quality of goods or services to the consumer.

In some cases, such as when a controller processes sensitive data as discussed above or processes data for purposes of profiling, it must conduct and document a data protection assessment for those activities. Such assessments are required for processing that presents a heightened risk of harm to a consumer.

Are controllers required to have agreements with processors?

As with the CCPA and other comprehensive data privacy laws, the law appears to require that a contract between a controller and a processor govern the processor’s data processing procedures with respect to processing performed on behalf of the controller. 

Among other things, the contract must require that the processor:

  • Ensure that each person processing personal data is subject to a duty of confidentiality with respect to the data;
  • At the controller’s direction, delete or return all personal data to the controller as requested at the end of the provision of services, unless retention of the personal data is required by law;
  • Upon the reasonable request of the controller, make available to the controller all information in its possession necessary to demonstrate the processor’s compliance with the obligations in this chapter;
  • After providing the controller an opportunity to object, engage any subcontractor pursuant to a written contract that requires the subcontractor to meet the obligations of the processor with respect to the personal data; and
  • Allow, and cooperate with, reasonable assessments by the controller or the controller’s designated assessor, or the processor may arrange for a qualified and independent assessor to conduct an assessment of the processor’s policies and technical and organizational measures in support of the obligations under the law, using an appropriate and accepted control standard or framework and assessment procedure for such assessments.  The processor shall provide a report of such assessment to the controller upon request.

Other provisions might be appropriate in an agreement between a controller and a processor, such as terms addressing responsibility in the event of a data breach and specific record retention obligations.

How is the law enforced?

The attorney general shall have sole and exclusive authority to enforce a violation of the statute.

If you have questions about New Hampshire’s privacy law or related issues, please reach out to a member of our Privacy, Data, and Cybersecurity practice group to discuss.

The California Invasion of Privacy Act (CIPA) has become a focal point in recent legal battles, particularly within the retail industry. As retailers increasingly adopt technologies like session replay and chatbots to enhance customer experiences, they may inadvertently tread into murky legal waters. These technologies, while valuable for optimizing websites and addressing customer inquiries, have faced a barrage of lawsuits and threats. Claimants argue that using these tools without obtaining customer consent amounts to wiretapping or using a “pen register.”

Session-replay software records customer interactions on websites, aiding in bug fixes, issue investigation, and market optimization. However, these tools may fall under so-called “two-party consent” statutes. For instance, California Penal Code § 631(a) requires consent from all parties involved in a communication. Retailers across various segments—clothing, finance, jewelry, and more—have found themselves in the crosshairs of these lawsuits.

At least 40 lawsuits involving CIPA have been filed in California since May 31, 2022. May 2022 was when the U.S. Court of Appeals for the Ninth Circuit ruled in Javier v. Assurance IQ that, under CIPA, all parties to a “communication” must consent to that communication. The court essentially found that if a website does not request consent before a consumer engages with the site, recording of any kind would be without valid consent.

As such, retailers with an online presence should review their use of technologies such as session replay and chatbots and implement a mechanism for obtaining the consumer’s consent prior to any interaction, to help comply with CIPA and other statutes that require two-party consent when recording communications.
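To make the idea concrete, below is a minimal browser-side sketch of consent gating in TypeScript. It is illustrative only: `initSessionReplay` is a hypothetical stand-in for a vendor’s loader, and an actual implementation should follow the vendor’s documentation and the advice of counsel. The key point is that recording does not begin until the visitor affirmatively opts in:

```typescript
// Hypothetical sketch: do not start session replay until the visitor
// affirmatively consents. `initSessionReplay` stands in for a vendor loader.
function initSessionReplay(): void {
  // A real vendor snippet would be invoked here.
  console.log("session replay started");
}

const CONSENT_KEY = "recording-consent";

function hasConsent(): boolean {
  return window.localStorage.getItem(CONSENT_KEY) === "granted";
}

// Wire this to the "I agree" / "I decline" buttons of a consent banner.
function recordConsentChoice(granted: boolean): void {
  window.localStorage.setItem(CONSENT_KEY, granted ? "granted" : "denied");
  if (granted) {
    initSessionReplay(); // recording begins only after an affirmative opt-in
  }
}

// On page load: recording is OFF by default; resume only if consent exists.
if (hasConsent()) {
  initSessionReplay();
}
```

A production version would typically also record when and how consent was captured, which can help document compliance if a claim later arises.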

If you have questions about CIPA compliance or related issues, contact a Jackson Lewis attorney to discuss.

On February 28, 2024, President Biden issued an Executive Order (EO) seeking to protect the sensitive personal data of Americans from potential exploitation by particular countries. The EO acknowledges that access to Americans’ “bulk sensitive personal data” and United States Government-related data by countries of concern can, among other things:

…fuel the creation and refinement of AI and other advanced technologies, thereby improving their ability to exploit the underlying data and exacerbating the national security and foreign policy threats.  In addition, access to some categories of sensitive personal data linked to populations and locations associated with the Federal Government — including the military — regardless of volume, can be used to reveal insights about those populations and locations that threaten national security.  The growing exploitation of Americans’ sensitive personal data threatens the development of an international technology ecosystem that protects our security, privacy, and human rights.

The EO also acknowledges that, due to advances in technology combined with countries of concern’s access to large data sets, data that is anonymized, pseudonymized, or de-identified increasingly can be re-identified or de-anonymized. This prospect is particularly concerning for health information, warranting additional steps to protect health data and human genomic data from these threats.

The EO does not specifically define “bulk sensitive personal data” or “countries of concern”; it leaves those definitions to the Attorney General and forthcoming regulations. However, under the EO, “sensitive personal data” generally refers to elements of data such as covered personal identifiers, geolocation and related sensor data, biometric identifiers, personal health data, personal financial data, or any combination thereof.

Significantly, the EO does not broadly prohibit:

United States persons from conducting commercial transactions, including exchanging financial and other data as part of the sale of commercial goods and services, with entities and individuals located in or subject to the control, direction, or jurisdiction of countries of concern, or impose measures aimed at a broader decoupling of the substantial consumer, economic, scientific, and trade relationships that the United States has with other countries. 

Instead, building on previous executive actions, such as Executive Order 13694 of April 1, 2015 (Blocking the Property of Certain Persons Engaging in Significant Malicious Cyber-Enabled Activities), the EO intends to establish “specific, carefully calibrated actions to minimize the risks associated with access to bulk sensitive personal data and United States Government-related data by countries of concern while minimizing disruption to commercial activity.”

In short, among other things, the EO:

  • Directs the Attorney General, in coordination with the Department of Homeland Security (DHS), to issue regulations that prohibit or otherwise restrict United States persons from engaging in certain transactions involving bulk sensitive personal data or United States Government-related data, including transactions that pose an unacceptable risk to national security. The proposed regulations, to be issued within 180 days of the EO, would identify the prohibited transactions, countries of concern, and covered persons.
  • Directs the Secretary of Defense, the Secretary of Health and Human Services, the Secretary of Veterans Affairs, and the Director of the National Science Foundation to consider steps, including issuing regulations, guidance, etc. to prohibit the provision of assistance that enables access by countries of concern or covered persons to United States persons’ bulk sensitive personal data, including personal health data and human genomic data.  

At this point, it remains to be seen how this EO might impact certain sensitive personal information or transactions involving the same.

Jackson Lewis will continue to track developments regarding the EO and related issues in data privacy. If you have questions about the Executive Order or related issues, contact a Jackson Lewis attorney to discuss.

Artificial intelligence tools are fundamentally changing how people work. Tasks that used to be painstaking and time-consuming are now able to be completed in real-time with the assistance of AI.

Many organizations have sought to leverage the benefits of AI in various ways. An organization, for instance, can use AI to screen resumes and identify which candidates are likely to be the most qualified. The organization can also use AI to predict which employees are likely to leave the organization so retention efforts can be implemented.

One AI use that is quickly gaining popularity is performance management of employees. An organization could use AI to summarize internal data and feedback on employees to create performance summaries for managers to review. By constantly collecting this data, the AI tool can help ensure that work achievements or issues are captured in real-time and presented effectively on demand. This can also help facilitate more frequent touchpoints for employee feedback—with less administrative burden—so that organizations can focus more on having meaningful conversations with employees about the feedback they receive and recommended areas of improvement.

While the benefits of using AI have been well publicized, its potential pitfalls have attracted just as much publicity. The use of AI tools in performance management can expose organizations to significant privacy and security risks, which need to be managed through comprehensive policies and procedures.

Potential Risks

  1. Accuracy of information. AI tools have been known to create outputs that are nonsensical or simply inaccurate, commonly referred to as “AI hallucinations.” Rather than relying solely on an AI tool’s outputs, an organization should independently verify their accuracy. Inaccurate statements in an employee’s performance evaluation, for instance, could expose the organization to significant liability.
  2. Bias and discrimination. AI tools are trained using historical data from various sources, which can inadvertently perpetuate biases existing in that data. In a joint statement issued by several federal agencies, the agencies highlighted that the datasets used to train AI tools could be unrepresentative, incorporate historical bias, or correlate data with protected classes, which could lead to a discriminatory outcome. A recent experiment conducted with ChatGPT illustrated how these embedded biases can manifest in the performance management context.
  3. Compliance with legal obligations. In recent years, legislatures at the federal, state, and local levels have prioritized AI regulation in order to protect individuals’ privacy and secure data. Last year, New York City’s AI law took effect, requiring employers to conduct bias audits before using AI tools in employment decisions. Other jurisdictions—including California, New Jersey, New York, and Washington, D.C.—have proposed similar bias audit legislation. In addition, Vermont introduced legislation that would prohibit employers from relying solely on information from AI tools when making employment-related decisions. As more jurisdictions become active with AI regulation, organizations should remain mindful of their obligations under applicable laws.

Mitigation Strategies

  1. Conduct employee training. Organizations should ensure all employees are trained on the use of AI tools in accordance with organization policy. This training should include information on the potential benefits and risks associated with AI tools, organization policies concerning these tools, and the operation and use of approved AI tools.
  2. Examine issues related to bias. To help minimize risks related to bias in AI tools, organizations should carefully review the data and algorithms used in their performance management platforms. Organizations should also explore what steps, if any, the AI-tool vendor took to successfully reduce bias in employment decisions.  
  3. Develop policies and procedures to govern AI use. To comply with applicable data privacy and security laws, an organization should ensure that it has policies and procedures in place to regulate how AI is used in the organization, who has access to the outputs, with whom the outputs are shared, where the outputs are stored, and how long the outputs are kept. Each of these important considerations will vary across organizations, so it is critical that the organization develops a deeper understanding of the AI tools sought to be implemented.

For organizations seeking to use AI for performance management of employees, it is important to be mindful of the risks associated with AI use. Most of these risks can be mitigated, but it will require organizations to be proactive in managing their data privacy and security risks.  

On February 13, 2024, Nebraska’s Governor signed Legislative Bill 308, which enacts genetic privacy protections for consumers in the state. It is similar to the genetic information privacy law Montana passed last year.

The law takes effect July 17, 2024 (90 days after the legislature adjourns on April 18, 2024).  

Covered Businesses

The law applies to direct-to-consumer genetic testing companies, defined as entities that:

  • Offers consumer genetic testing products or services directly to a consumer; or,
  • Collects, uses, or analyzes genetic data that resulted from a direct-to-consumer genetic testing product or service and was provided to the company by the consumer.

The law does not cover entities that are solely engaged in collecting, using, or analyzing genetic data or biological samples in the context of research under federal law.

Covered Consumers

The law applies to an individual who is a resident of the State of Nebraska.

Obligations Under the Law

Under the new law, covered businesses are required to:

  • Provide clear and complete information regarding the company policies and procedures for the collection, use, or disclosure of genetic data
  • Obtain a consumer’s consent for the collection, use, or disclosure of the consumer’s genetic data
  • Require a valid legal process before disclosing genetic data to any government agency, including law enforcement, without the consumer’s express written consent
  • Develop, implement, and maintain a comprehensive security program to protect a consumer’s genetic data from unauthorized access, use, or disclosure

Similar to several comprehensive consumer privacy laws, the company must provide a consumer with:

  • Access to their genetic data
  • A process to delete an account and genetic data
  • A process to request and obtain written documentation verifying the destruction of the consumer’s biological sample

Enforcement

Under the new law, the Nebraska Attorney General may bring an action on behalf of a consumer to enforce rights under the law. There is no private right of action specified within the statute.

A violation of the act is subject to a civil penalty of $2,500 per violation, in addition to actual damages, costs, and reasonable attorney’s fees.

If you have questions about Nebraska’s genetic privacy law or related issues, please reach out to a member of our Privacy, Data, and Cybersecurity practice group to discuss.

In 2023, a California superior court ruled that enforcement of any final California Privacy Protection Agency regulation could not begin until 12 months after the date that regulation became final. Under that ruling, enforcement of the initial regulations finalized in March 2023 could not commence until March 2024.

The California Privacy Protection Agency (CPPA) appealed the decision, and on February 9, 2024, the California Court of Appeal reversed the superior court. With the reversal, the regulations finalized last year are now enforceable, ahead of the March 2024 date set by the lower court’s ruling.

The ruling also enables the CPPA to immediately begin enforcing other future regulations as soon as they are finalized, rather than having to wait a year as previously ruled by the superior court.

The regulations passed in March 2023 were intended to:

  1. Update existing regulations to fit with amendments made by the California Privacy Rights Act (CPRA);
  2. Put into operation new rights and concepts introduced by the CPRA; and
  3. Make the regulations more streamlined and easier to understand.

The revised regulations include regulations on data processing agreements, consumer opt-out mechanisms, mandatory requirements for recognition of opt-out preference signals, and consumer request handling.

Jackson Lewis will continue to track information related to privacy regulations and related issues. For additional information on the CPRA, please reach out to a member of our Privacy, Data, and Cybersecurity practice group.

For healthcare providers and health systems covered by the privacy and security regulations under the Health Insurance Portability and Accountability Act (HIPAA), a breach of unsecured protected health information (PHI) likely triggers obligations to notify affected individuals, the federal Office for Civil Rights (OCR), and potentially the media and other entities. The breach also may require notification to one or more state Attorneys General, an obligation that depends on state law. Currently, the state data breach notification law in Michigan does not provide for Attorney General notification, something Michigan Attorney General Dana Nessel wants to change, according to reporting earlier this month from the HIPAA Journal.

Spurring the Michigan AG are concerns about the timing of notification to patients regarding recent breaches that involved health systems but were actually experienced by downstream vendors. These are important concerns considering the increasing identity crimes and overall data risk individuals face, which can be mitigated to some degree with timely notification. However, health systems and entities in other industries can find themselves caught in a tough spot from a notification perspective when dealing with a breach experienced by a vendor.

On the one hand, quickly putting notification in the hands of individuals about a compromise of their personal data is critical to helping those individuals take measures to protect themselves from ID theft and other harms. Notification may prompt individuals to be more vigilant about their personal information, review credit reports, set up a fraud alert, check their bank statements, and take other measures to protect themselves from cyber criminals. On the other hand, as a practical matter, the time between the date the breach occurred (as experienced by a downstream vendor) and the date of notification to patients can be affected by many factors, several of which may be outside the control, and sometimes the knowledge, of the covered entity. Looking solely to that metric in some cases may not be the most appropriate measure of timeliness to assess a covered entity’s performance and compliance when responding to a breach. If it is a metric upon which enforcement can be based, covered entities may need to revisit their incident response plans and vendor relationships to shorten that timeframe as much as possible.

Let’s unpack this a little.

  • Recall that under HIPAA, a breach must be reported “without unreasonable delay and in no case later than 60 calendar days after discovery.” 45 CFR 164.404(b) (emphasis added).
  • A downstream vendor experiencing a breach of PHI likely is (but not always) a business associate of the covered healthcare provider. Of course, the relationship may not be that close. The vendor may be the subcontractor of the subcontractor of the business associate of the covered entity.
  • The general rule under the HIPAA Breach Notification rule is that business associates are obligated to notify the covered entity of a breach, not the affected individuals. See 45 CFR 164.410(a)(1). When there are multiple layers of business associates, a chain of notification commences where one business associate notifies the next business associate upstream and so on until getting to the covered entity. In many cases, the business associate experiencing a breach may not know what entity or entities are the ultimate covered entity(ies). See more on that below.
  • Under the HIPAA Breach Notification rule, business associates are not obligated to notify affected individuals. That obligation, unless delegated, remains with the covered entity. 45 CFR 164.404(a)(1).
  • The HIPAA Breach Notification rule also provides that when a business associate has a breach it must report “the identification of each individual whose unsecured protected health information has been, or is reasonably believed by the business associate to have been, accessed, acquired, used, or disclosed during the breach.” 45 CFR 164.410(c)(1).
  • In some cases, vendors effectively have no access to the PHI that they maintain or store for the ultimate covered entities, but still may be considered business associates. Other similar vendors may fall under a “conduit exception” and not be considered business associates under HIPAA. In either case, they may nonetheless have obligations other than HIPAA (statutory or contractual) to notify their customers of a breach. In these cases, however, the vendors simply may not be in a position to provide critical information upstream, such as identity of the affected individuals.
  • As the reporting of the data breach travels upstream, the covered entity may be completely unaware of the breach. It could be weeks or even months after the breach actually occurred before news of the breach reaches the covered entity. Consider that the vendor that experienced the breach may not have discovered it for some time after the attack happened, further expanding the time between the breach occurring and ultimate notification to patients.
  • Upon discovery of a security incident from a business associate, which already could be long after the breach actually occurred and several layers downstream, the covered entity must initiate its incident response plan. One of the first tasks will be to understand what happened and what data was affected. This news often does not come with a spreadsheet from which the affected individuals could easily be identified. It may instead arrive in the form of a long list of files and folders that contain thousands and thousands of documents, images, records, etc. Many of these items may have no PHI whatsoever. The challenge is to find those documents, images, records, etc. that do, and to pull from those items the individuals affected and the kind of information involved. This process, sometimes referred to as data mining and document review, often is complex, time-consuming, and costly (the simplified sketch following this list illustrates why).
  • On completion of the data mining and document review process, the covered entity will begin to have a better sense of the individuals affected, the type of information compromised, the state(s) in which those individuals reside, etc. At this point, covered entities will work quickly to arrange for notification to individuals, the OCR, and, if applicable, the media, state agencies, and others.
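By way of illustration only, here is a toy TypeScript sketch of the pattern-matching step of data mining. The directory and patterns are hypothetical; real breach reviews rely on specialized vendors and extensive manual review. The point is simply that affected individuals must be extracted from unstructured files rather than read off a spreadsheet:

```typescript
// Toy illustration of the "data mining" step in breach response: flag
// exported text files that may contain PHI-like patterns for manual review.
// The directory and patterns here are hypothetical, not a production tool.
import { readdirSync, readFileSync } from "fs";
import { join } from "path";

const PATTERNS: Record<string, RegExp> = {
  ssn: /\b\d{3}-\d{2}-\d{4}\b/,              // Social Security number format
  mrn: /\bMRN[:\s#]*\d{6,10}\b/i,            // hypothetical medical record number
  dob: /\bDOB[:\s]*\d{2}\/\d{2}\/\d{4}\b/i,  // date-of-birth notation
};

function flagFilesForReview(dir: string): string[] {
  const flagged: string[] = [];
  for (const entry of readdirSync(dir, { withFileTypes: true })) {
    if (!entry.isFile()) continue; // skip subfolders in this simple pass
    const text = readFileSync(join(dir, entry.name), "utf8");
    const hits = Object.keys(PATTERNS).filter((k) => PATTERNS[k].test(text));
    if (hits.length > 0) {
      flagged.push(`${entry.name}: possible PHI (${hits.join(", ")})`);
    }
  }
  return flagged;
}

// Hypothetical export location received from the vendor's incident report.
console.log(flagFilesForReview("./exported-files"));
```

Even with automated filtering like this, human review is still needed to link each hit to a specific individual and data element, which is a major driver of the timeline described above.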

There is no doubt that breach notification laws serve an important purpose, namely, to alert affected individuals about a compromise of their sensitive data so that they can take steps to protect against ID theft and other risks. However, the promptness of notice can be, and often is, hampered by factors outside of the covered entity’s control, particularly if the measure of promptness is the time between the date the breach occurred (regardless of what entity experienced the breach) and the date of notification to individuals.

All that being said, there may be some ways that covered entities might tighten up this process. One consideration, of course, is to adopt, regularly assess, and practice an incident response plan. Another is to have a more granular understanding of the data certain vendors are handling for the covered entity. Still another consideration is to revisit the entity’s vendor management program. Looking more closely at downstream service providers beyond direct business associates might be helpful in assessing the notification process and timing should a breach take place downstream. Having more information about downstream vendors, their roles, and the data they process may serve to shorten the notification timeline. Ultimately, even if there is a delay downstream before the covered entity discovers the breach, a well-executed incident response plan, one that results in a shortened timeframe between discovery and notification, could help to improve the covered entity’s defensible position whether facing litigation or a government agency enforcement action.

To celebrate Data Privacy Day (January 28), we present our top ten data privacy and cybersecurity predictions for 2024.

  1. AI regulations to protect data privacy.

Automated decision-making tools, smart cameras, wearables, and similar applications powered by technology commonly referred to as “artificial intelligence” or “AI” will continue to expand in 2024, as will the regulations intended to protect individuals’ privacy and secure data when those technologies are deployed. Last year, we saw a comprehensive Executive Order from the Biden Administration, the New York City AI law take effect, and states like Connecticut pass laws regarding state use of AI. Already in 2024, several states have introduced proposed AI regulation, such as New York’s proposed AI Bill of Rights.

The use of “generative AI” also exploded, as several industries sought to leverage its benefits while trying to manage risks. In healthcare, for example, AI and HIPAA do not always mix when it comes to maintaining the confidentiality of protected health information. Additionally, generative AI is not only used for good, as criminal threat actors have enhanced their phishing attacks against the healthcare industry.

  2. The continued expansion of the patchwork of state privacy laws.

In 2023, seven states added comprehensive consumer privacy laws. And several other states enacted more limited privacy laws dealing with social media or health-related data. It looks like 2024 will continue the expansion. Already in 2024, New Jersey has passed its own consumer privacy law, which takes effect in 2025. And New Hampshire is not far behind in potentially passing a statute.

  3. Children’s data protections will expand.

In 2023, several states passed or considered data protection legislation for minors with growing concerns that the Children’s Online Privacy Protection Act (COPPA) was not sufficient to protect children’s data. Connecticut added additional protections for minors’ data in 2023.

In 2024, the Federal Trade Commission (FTC) issued a notice of proposed rulemaking pertaining to COPPA, in addition to several states proposing legislation to protect children’s online privacy.

  4. Cybersecurity audits will become even more of a necessity to protect data.

As privacy protection legislation increases, businesses must start working to protect the data they collect and maintain. Conducting cybersecurity audits helps ensure that the necessary policies and procedures are in place.

In 2023, the California Privacy Protection Agency considered regulations pertaining to cybersecurity audits. The SEC and FTC expanded obligations for reporting security breaches, making audits, incident response planning, and tabletop exercises to avoid such incidents all the more important.

It is anticipated there will be further regulations and legislation forcing companies to consider their cybersecurity in order to protect individuals’ privacy.

  5. Genetic and health data protection will continue to rise.

In 2023, Nevada and Washington passed health data privacy laws to protect data not subject to HIPAA, and Montana passed a genetic information privacy law. Already this year, Nebraska is advancing its own genetic information privacy law. Concerns about health and genetic data are likely to grow along with other privacy concerns, and so too will the legislation and regulations. We also have seen a significant uptick in class action litigation in Illinois under the state’s Genetic Information Privacy Act (GIPA). A close relative of the state’s Biometric Information Privacy Act (BIPA), GIPA carries nearly identical remedy provisions, except that the amounts of statutory damages are higher than under BIPA.

  6. Continued enforcement actions for data security.

As legislation and regulations grow, so too will enforcement actions. Many of the state statutes and city regulations allow only for governmental enforcement; however, those authorities are expected to enforce the requirements to ensure businesses have an incentive to comply. In 2023, we saw the New York Attorney General continue its active enforcement of data security requirements.

  7. HIPAA compliance will continue to be difficult as it overlaps with cybersecurity.

In 2023, the Office for Civil Rights (OCR), which enforces HIPAA, discussed the challenges of driving cybersecurity and HIPAA compliance, as well as other compliance concerns. In 2024, entities required to comply with HIPAA will be challenged to determine how to use new and useful technologies and data sharing while maintaining privacy and protecting HIPAA-covered information as cybersecurity threats continue to flourish.

  8. Website tracking technologies will continue to be in the hot seat.

In 2023, both the FTC and the Department of Health and Human Services (HHS) took issue with website tracking technologies such as “pixels.” By the time that guidance was issued, litigation concerning these technologies and related data privacy and data sharing concerns had already been expanding. To help clients identify and address these risks, Jackson Lewis and SecondSight joined forces to offer organizations a website compliance assessment tool that has been well received.

In 2024, it is anticipated that there will be further website-tracking litigation as well as enforcement actions from governmental agencies that see the technology as infringing on consumers’ privacy rights.

  9. Expect biometric information to increasingly be leveraged to address privacy and security concerns.

As we move toward a “passwordless” society, technologies using biometric identifiers and information continue to be the “go-to” method for authentication. However, regulations on the collection and use of biometric information also are increasing. While the Illinois Biometric Information Privacy Act (BIPA) is the most prolific in its protection of biometric information, many of the new comprehensive privacy laws include protections for biometric information. See our biometric law map for developments.

  10. Privacy class actions will continue to increase.

Whether it is BIPA, GIPA, CIPA, TCPA, DPPA, pixel litigation, or data breach class actions, 2024 will likely see an increase in privacy-related class actions. As such, it becomes more important than ever for businesses to understand and ensure the protection of the data they collect and control.

For these reasons and others, we believe data privacy will continue to be at the forefront of many industries in 2024, and Jackson Lewis will continue to track relevant developments. Happy Privacy Day!

On January 16, 2024, New Jersey’s Governor signed Senate Bill (SB) 332, which establishes a consumer data privacy law for the state. New Jersey becomes the 13th state to pass a comprehensive consumer data privacy law. The law takes effect one year after its enactment, on January 16, 2025.

To whom does the law apply?

The law applies to controllers, defined as individuals or legal entities that, alone or jointly with others, determine the purpose and means of processing personal data, that do business in New Jersey or produce products or services targeted at New Jersey residents and that during a calendar year either:

  • Control or process the personal data of at least 100,000 consumers, excluding personal data processed solely to complete a payment transaction; or
  • Control or process the personal data of at least 25,000 consumers and the controller derives revenue, or receives a discount on the price of any goods or services, from the sale of personal data.

Who is protected by the law?

Under the law, a covered consumer is defined as a person who is a resident of New Jersey acting only in an individual or household context. As in several other states (California excepted), the definition does not include a person acting in a commercial or employment context.

What data is protected by the law?

The law will protect data that qualifies as “personal data” which is information that is linked or reasonably linkable to an identified or identifiable person. It does not include de-identified data or publicly available information.

What are the rights of consumers?

Under the law, a consumer has the following rights:

  • To confirm whether a controller processes the consumer’s personal data and access such personal data.
  • To correct inaccuracies in the consumer’s personal data.
  • To delete personal data concerning the consumer.
  • To obtain a copy of the consumer’s data.
  • To opt out of the processing of personal data for the purposes of targeted advertising, the sale of personal data, or profiling in furtherance of decisions that produce legal or similarly significant effects concerning the consumer.

What obligations do businesses have?

A controller shall provide a consumer with a reasonably accessible, clear, and meaningful privacy notice that includes, but is not limited to:

  • The categories of the personal data that the controller processes.
  • The purpose of processing personal data.
  • The categories of all third parties to which the controller may disclose a consumer’s personal data.
  • The categories of personal data that the controller shares with third parties, if any.
  • How consumers may exercise their consumer rights.
  • The process by which the controller notifies consumers of material changes to the notification.
  • An active e-mail address or other online mechanism that consumers may use to contact the controller.

If the controller sells personal data to third parties or processes personal data for purposes of targeted advertising, the sale of personal data, or profiling of a consumer, the controller shall clearly and conspicuously disclose such sale or processing, as well as the manner in which a consumer may opt out of the sale or processing.

A controller must respond to a verified consumer rights request from a consumer within 45 days of the controller’s receipt of the request. The controller may extend the response period by 45 additional days when reasonably necessary considering the complexity and number of the consumer’s requests.

How is the law enforced?

The attorney general shall have sole and exclusive authority to enforce a violation of the statute.

If you have questions about New Jersey’s privacy law or related issues, please reach out to a member of our Privacy, Data, and Cybersecurity practice group to discuss.

Phishing has long been a favorite tactic for threat actors (hackers) to commence a cyberattack. The rapid expansion of more adaptable and available artificial intelligence (AI) technologies, such as natural language processing and large language models, now fuels more ferocious phishing campaigns. The effects are being felt in many industries, perhaps most notably the healthcare industry. One indicator is the Office for Civil Rights’ (OCR) recent announcement of its “First Ever Phishing Cyber-Attack Investigation.”

In October 2023, the U.S. Department of Health and Human Services (HHS) and the Health Sector Cybersecurity Coordination Center (HC3) published a white paper entitled AI-Augmented Phishing and the Threat to the Health Sector (the “HC3 Paper”). While many have been using ChatGPT and similar platforms to leverage generative AI capabilities to craft client emails, lay out vacation itineraries, support coding efforts, and help write school papers, threat actors have been hard at work using the technology for other purposes. According to the HC3 Paper:

Making this even easier for attackers, tools such as FraudGPT have been developed specifically for nefarious purposes. FraudGPT is a generative AI tool that can be used to craft malware and texts for phishing emails. It is available on the dark web and on Telegram for a relatively cheap price – a $200 per month or $1700 per year subscription fee – which makes it well within the price range of even moderately-sophisticated cybercriminals.

The HC3 Paper is informative. It not only outlines some basics about AI and the healthcare industry, but also speaks to helpful countermeasures and best practices. These include:

  • email filtering,
  • employee training and awareness,
  • multifactor authentication, and
  • endpoint security management.

Of course, phishing is nothing new. As noted in the HC3 Paper, the FBI’s Internet Crime Complaint Center (IC3) found that phishing attacks were the number one reported cybercrime in 2022, with over 300,000 complaints reported. And, important for the healthcare industry, phishing was the most common attack impacting healthcare organizations, amounting to nearly half of the attacks in 2021, according to the Healthcare Information and Management Systems Society.

This brings us to OCR’s recent enforcement action and resolution agreement. According to the OCR announcement, a relatively small urgent care center in Louisiana experienced a HIPAA breach that was initiated by a phishing attack. Reports about the incident suggest the attack affected nearly 35,000 individuals. According to the resolution agreement, OCR alleged that prior to the incident, the HIPAA-covered entity had not conducted a HIPAA Security Rule risk analysis or implemented procedures to regularly review records of information system activity. In addition to paying $480,000 in restitution, the center agreed to a two-year corrective action plan.

Phishing attacks are serious business, and they have become more so since being fueled by AI technologies; the healthcare industry continues to be a prime target. It is critical for covered entities and business associates not only to implement measures to prevent these attacks, but also to be prepared to respond when they occur. Develop and maintain an incident response plan! Getting back to basics on HIPAA compliance also is critical when responding to an OCR inquiry. Cyberattacks happen and inquiries may follow. Maintaining a record of HIPAA compliance will be one of, if not the, most important elements in the response to OCR or a state agency.