Much is being written about “remote work” – is it productive, will demand for it continue or be curtailed in a recession, is cybersecurity compromised, does it inhibit workplace culture, collaboration, etc. Lots of questions, few clear answers. The discussion seems largely centered on office workers, professional services providers like me, who generally can perform the basic functions of our jobs just about anywhere.

But “virtual” nurses?

Well, yes, this should not come as a surprise, considering the growth of telehealth, in particular during the COVID-19 pandemic. For many reasons, using digital information and communication technologies to deliver healthcare services can provide enormous benefits to the overall healthcare system. Indeed, predictions from many leaders in healthcare see expanded use of remote patient care and monitoring, along with other technologies such as artificial intelligence, robotics, and wearables.

As with any significant shift in an organization’s business model, however, there are likely to be some challenges and risks. Among those risks is that individually identifiable health information of patients can become potentially more vulnerable in a remote work environment.

Keep in mind that large health systems are not the only providers of healthcare that can benefit from the virtual delivery of certain healthcare services, including patient monitoring. Similar benefits can be derived by “home” healthcare providers, mental health counselors and therapists, surgical practices, other categories of providers, and their patients. Yet, many of the same data-related risks and compliance challenges remain:

HIPAA Compliance. There are many aspects of the HIPAA privacy and security regulations that need to be considered. Covered entities should conduct and document a risk assessment to understand the threats and vulnerabilities of a new or enhanced remote work arrangement (including new devices and equipment that facilitate the arrangement). Policies and procedures may need to be amended or created based on the findings of the assessment, such as enhanced security and training, review of remote work environment, revisions to data retention and destruction procedures, and procedures related to a change or termination of a remote worker’s employment.

Contractual Requirements. Providers may have contract obligations limiting their ability to deliver services remotely. It is not uncommon to see contract terms prohibiting the storage of protected health information outside the US, for instance. Providers need to understand whether remote worker services fall within the scope of such agreements, and what needs to be done to comply.

Insurance Coverage. In the case of a security incident or data breach, a cyber insurance policy can be vitally important to a healthcare organization. Verifying coverage applies to a new remote work arrangement is better performed before the incident than in the middle of the investigation.

The “Remote Work” policy. Providers need to think about the environment healthcare workers will be working in when remote, and what policies are necessary and prudent. Clearly, secure connections are needed for workers to be able to access patient data, communicate with patients, and satisfy charting, reporting, billing, and other related obligations.

Questions about work location, access to systems, distractions, monitoring, and others require a careful look at the effectiveness of an organization’s remote work policy. Recall that protecting patient data is not limited to confidentiality and security; the integrity of medical data is vital. The issues here also extend beyond data privacy and security to include employee relations, patient relations, efficiency, compliance with wage and hour laws, ease and effectiveness of management, and productivity, to name a few.

Monitoring performance. A significant concern for managers of remote workers is the ability to manage – being able to train newer workers, coach more senior workers, and monitor performance, among other things. Again, technology can be helpful here, but it can raise additional risks. Recording calls, tracking employee keystrokes, capturing screenshots, and requiring employees to remain on camera during work hours can all be effective monitoring tools. However, they can come with compliance requirements, significant legal risks, and employee relations challenges. Providers also need to consider who monitors the monitors. This task often falls to the IT department, and it can invite abuse even when the activity is well-intended.

Delivering healthcare remotely is an exciting development and promises to deliver enormous benefits, particularly for a national and dynamic healthcare system facing staffing shortages and other systemic challenges. However, care needs to be taken when implementing these arrangements to help minimize legal and compliance risks, and to maintain a high level of care, patient satisfaction, accessibility, and employee relations.

In 2021, New York City enacted a measure that banned the use of Automated Employment Decision-Making Tools (“AEDT”) to (1) screen job candidates for employment, or (2) evaluate current employees for promotion, unless the tool has been subject to a “bias audit, conducted not more than one year prior to the use of the tool.” The law also required certain notifications regarding the use of AEDTs to be made to job seekers. The measure, known as Local Law 144 of 2021, was set to take effect on January 1, 2023.

In September 2022, the NYC Department of Consumer and Worker Protection (DCWP) issued guidance about the new ordinance and announced it was hosting an initial public hearing. Following the hearing, DCWP announced the law would not be enforced until April 1, 2023, due to the large number of public comments it received.

At the end of December 2022, DCWP released revised proposed rules to implement the ordinance and scheduled a further public hearing for January 23, 2023. These proposed rules modify the initial proposed rules. The comment period for the proposed regulations will remain open until January 16, 2023.

Here are some of the important highlights of the recently released rules:

Modification of the Definition of Automated Employment Decision Tools (AEDT)

Under the ordinance, an AEDT is defined as any computational process, derived from machine learning, statistical modeling, data analytics, or artificial intelligence, that issues simplified output, including a score, classification, or recommendation, that is used to substantially assist or replace discretionary decision making for making employment decisions that impact natural persons.

The latest proposed rules seek to clarify this definition by stating that the phrase “to substantially assist or replace discretionary decision making” means:

            (i) to rely solely on a simplified output (score, tag, classification, ranking, etc.), with no other factors considered;

            (ii) to use a simplified output as one of a set of criteria where the simplified output is weighted more than any other criterion in the set; or

            (iii) to use a simplified output to overrule conclusions derived from other factors including human decision-making.
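Clause (ii) turns on whether the simplified output is weighted more heavily than every other criterion in the set. As a rough sketch only (the criteria names and weights below are hypothetical, not drawn from the proposed rules), the comparison looks like this:

```python
# Hypothetical hiring-score setup illustrating clause (ii): is the AEDT's
# "simplified output" weighted more than any other criterion in the set?
# All criteria names and weights are illustrative, not from the rules.

def aedt_weight_dominates(weights: dict[str, float], aedt_key: str) -> bool:
    """Return True if the AEDT output's weight exceeds every other criterion's weight."""
    other = [w for k, w in weights.items() if k != aedt_key]
    return weights[aedt_key] > max(other)

criteria = {
    "aedt_score": 0.40,       # simplified output from the tool
    "interview": 0.35,
    "references": 0.15,
    "writing_sample": 0.10,
}

print(aedt_weight_dominates(criteria, "aedt_score"))  # True
```

On these hypothetical numbers, the tool's score carries more weight than any single other criterion, so the use would appear to fall within the clarified definition; if the interview were weighted highest instead, clause (ii) alone would not be triggered.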

Clarification Regarding Bias Audits

The proposed rules also aim to clarify the meaning and scope of bias audits and independent auditors.

Bias Audits – The proposed rules indicate that historical data may be used to conduct a bias audit. Notably, if there is insufficient historical data to conduct a statistically sound bias audit, test data may be used. But if test data is utilized, the required bias audit summary must explain the reason(s) historical data was not used and describe how the test data was generated and obtained. And if multiple employers are using the same AEDT, they may rely upon the same bias audit so long as they provide historical data, if available, for the independent auditor to consider in that audit. Employers must ensure that they are relying on bias audits that are no more than one year old.

Independent Auditors – The proposed rules further seek to end any uncertainty as to what constitutes an “independent auditor.” Under the new definition, an “independent auditor” may not be employed by, or have a financial interest in, an employer or employment agency that seeks to use or continue to use an AEDT, or in a vendor that developed or distributed the AEDT.

Understandably, these changes only represent a fraction of the proposed rules that will be discussed at the upcoming hearing.

Jackson Lewis will continue to track guidance and changes pertaining to regulations pertaining to AI and automated decision-making. If you have questions about the NYC ordinance or related issues, contact a Jackson Lewis attorney to discuss.

Continuing its initiative regarding the use of data, automated processes, and artificial intelligence (“AI”), the U.S. Equal Employment Opportunity Commission (“EEOC”) is holding a hearing on January 31, 2023, to examine the use of automated systems and AI in employment decisions.

This in-person hearing will begin at 10:00am EST on January 31 at the EEOC headquarters in Washington DC and will be livestreamed. There is also an option for listening via telephone.

Read the full article on Jackson Lewis’ Data Intelligence Reporter.

Last month, the U.S. Department of Health and Human Services (HHS) Office for Civil Rights (OCR) issued a bulletin with guidance concerning the use of online tracking technologies by covered entities and business associates under the Health Insurance Portability and Accountability Act (HIPAA). The OCR Bulletin follows a significant uptick in litigation concerning these technologies in industries including, but not limited to, healthcare. For healthcare entities, the allegations relate to the sharing of patient data obtained from patient portals and websites.

THE OCR BULLETIN

A Few Reminders

Before digging into the OCR Bulletin, let’s remember a few basic HIPAA rules:

  • In general, the HIPAA privacy and security regulations (the “HIPAA Rules”) apply only to “covered entities” and “business associates” (we’ll call these “regulated entities”).
  • The HIPAA Rules apply to “protected health information” (PHI) which generally includes individually identifiable health information. That is, health information that relates to the individual’s past, present, or future health, health care, or payment for care, including demographic information. See 45 CFR 160.103.
  • Regulated entities can use or disclose PHI without an individual’s written authorization only as expressly permitted or required by the HIPAA Rules. See 45 CFR 164.502(a).

Definition of Tracking Technologies and Their Uses

As discussed in the OCR Bulletin, an online tracking technology is

a script or code on a website or mobile app used to gather information about users as they interact with the website or mobile app

Examples of these tracking technologies on websites include cookies, web beacons, or tracking pixels. Mobile apps may use tracking technologies such as tracking codes within the app, as well as captures of device-related information. As noted in the Bulletin,

For example, mobile apps may use a unique identifier from the app user’s mobile device, such as a device ID or advertising ID. These unique identifiers, along with any other information collected by the app, enable the mobile app owner or vendor or any other third party who receives such information to create individual profiles about each app user

Tracking technologies, whether developed internally or by third parties, are used by website or mobile app owners for various reasons, including to better understand the user experience on their site or app. Technologies developed by third parties may be able to track users and gather information after they navigate away from the original site. The OCR Bulletin focuses on third party tracking technologies.  

Why Do Tracking Technologies Trigger HIPAA?

When a regulated entity uses tracking technologies developed by a third party vendor on its mobile app or website, such use may result in the collection and/or disclosure of PHI to the third party.

The Bulletin states:

All such IIHI collected on a regulated entity’s website or mobile app generally is PHI, even if the individual does not have an existing relationship with the regulated entity and even if the IIHI, such as IP address or geographic location, does not include specific treatment or billing information like dates and types of health care services.

(emphasis added.) So, according to the OCR, individuals with or without an existing patient relationship with the regulated entity could be sharing PHI with the entity (or a third party) through its website tracking technologies. This information might include an individual’s medical record number, home or email address, or dates of appointments, as well as an individual’s IP address or geographic location, medical device IDs, etc.

Notably, not all such technologies will be collecting identifiable information. The Bulletin recognizes a distinction between user-authenticated and unauthenticated webpages. User-authenticated pages require a user to log in before accessing the regulated entity’s page. According to the Bulletin, information collected on a user-authenticated webpage will be presumed to be PHI and subject to HIPAA.

Many regulated entities maintain unauthenticated webpages – those that do not require a log in for access. Typically, these are pages that provide general information only – locations, descriptions of services, policies and procedures, etc. – and tracking technologies on them typically would not have access to PHI, so the determination is more fact-specific. However, regulated entities should be aware that tracking on such pages could still capture PHI. Pages that address specific symptoms or health conditions, or that permit a visitor to search for a doctor or schedule an appointment, may collect information that qualifies as PHI where, for example, the visitor’s email address or IP address is also captured.

Importantly, the Bulletin clarifies that the HIPAA Rules do not apply to websites or mobile apps that are developed or offered by entities that are not regulated entities. For instance, a mobile app provider may offer individuals an online repository or tracking feature for their sensitive health information. If that provider is not a regulated entity, the HIPAA Rules do not apply, although other federal and/or state laws may, such as the Federal Trade Commission (FTC) Act or state comprehensive privacy laws such as the California Consumer Privacy Act. Notably, in September 2021, the FTC issued a policy statement confirming that covered companies (e.g., certain health apps) that hold fertility, heart health, glucose level, and other health data must notify consumers in the event of a breach.

HIPAA Obligations When Using Tracking Technologies

When a regulated entity uses tracking technologies on its website(s) or mobile app(s), it may have obligations under the HIPAA Rules. While we cannot cover all of those requirements here, we summarize some key obligations:

  • Investigate whether the site or app has access to PHI. As noted above, do not assume that because the site is unauthenticated or only collects email addresses, it is not collecting PHI.  
  • Ensure that all disclosures of PHI to tracking technology vendors are specifically permitted by the Privacy Rule and that unless an exception applies, only the minimum necessary PHI to achieve the intended purpose is disclosed.
    • Remember that if a disclosure of PHI requires an authorization under HIPAA, website privacy policies and website banners that ask users to accept or reject the use of tracking activities, standing alone, will be unlikely to constitute a valid authorization.
    • If a tracking technology vendor is creating, receiving, maintaining, or transmitting PHI on behalf of a regulated entity for a covered function, it will likely be considered a business associate. In that case, a business associate agreement may need to be in place between the regulated entity and the vendor.
  • Address the use of tracking technologies in a risk analysis and risk management processes, and implement safeguards in accordance with the HIPAA security regulations.
  • Provide breach notifications to affected individuals and the OCR if impermissible disclosures of PHI occur via tracking technology.

LITIGATION

During 2022, litigation concerning the use of website tracking technologies increased significantly. In one report, a health system settled claims for $18 million, while in another case, the plaintiffs alleged that over 650 hospital system or medical provider websites used the Meta Pixel tracking tool and sent data from those sites to Meta.

The trend does not just involve HIPAA regulated entities or HIPAA. According to a Bloomberg Law analysis, between February and October 2022, at least 47 proposed class actions were filed alleging transfers of “personal video consumption data from online platforms to Facebook without their consent,” in violation of the federal Video Privacy Protection Act.

For regulated entities under HIPAA, it is not much comfort that HIPAA does not have a private right of action for individuals. Plaintiffs are using other paths under similar federal and state laws to advance their claims. The trend is growing, but there are steps regulated entities can take to address these risks.

NEXT STEPS

Covered entities and business associates should conduct an audit of any tracking technologies used on their websites, web applications, or mobile apps and determine if they are being used in a manner that complies with HIPAA. Such tracking technologies should be included in a HIPAA risk analysis and risk management process.
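A first pass at such an audit can be scripted with the Python standard library. The sketch below flags page elements that load from a short, illustrative list of well-known tracker hostnames; a real audit should rely on a maintained tracker list, network-level inspection, and manual review rather than this handful of examples:

```python
from html.parser import HTMLParser

# Hostnames of a few widely used third-party trackers (illustrative, not exhaustive).
TRACKER_HOSTS = ("facebook.net", "facebook.com/tr", "google-analytics.com",
                 "googletagmanager.com", "doubleclick.net")

class TrackerScanner(HTMLParser):
    """Flag <script>/<img>/<iframe> tags that load from known tracker hosts."""
    def __init__(self):
        super().__init__()
        self.findings: list[tuple[str, str]] = []

    def handle_starttag(self, tag, attrs):
        if tag in ("script", "img", "iframe"):
            src = dict(attrs).get("src") or ""
            if any(host in src for host in TRACKER_HOSTS):
                self.findings.append((tag, src))

# Hypothetical page source; in practice this would be fetched from each site or app.
html = """
<html><body>
  <script src="https://connect.facebook.net/en_US/fbevents.js"></script>
  <img src="https://www.facebook.com/tr?id=123&ev=PageView" height="1" width="1"/>
  <script src="/static/app.js"></script>
</body></html>
"""

scanner = TrackerScanner()
scanner.feed(html)
for tag, src in scanner.findings:
    print(f"possible tracker: <{tag}> loading {src}")
```

Each hit is only a lead for follow-up: the audit then asks what data the flagged element transmits, whether that data could be PHI, and whether a business associate agreement or authorization is in place.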

Covered entities should review tracking technology vendor agreements and ensure a business associate agreement is in place to avoid potential impermissible disclosures of protected health information.

If through an audit it is found that tracking technologies are being used in a manner not compliant with HIPAA, notification may be required under HIPAA and applicable state law.

If you have questions about HIPAA compliance or related issues contact a Jackson Lewis attorney to discuss.

It usually happens after a reported data breach. The organization experiencing the breach sends notifications to affected individuals, as well as to federal and/or state agencies and perhaps other parties, where appropriate. Not long thereafter, the organization receives an inquiry from one or more government agencies. These inquiries typically seek more information about the breach and the incident response process, but also about the nature and extent of the organization’s data security policies and procedures in place prior to the breach. Deficiencies in any of these areas could support getting “whacked”!

On December 16, Pennsylvania’s Attorney General and soon-to-be Governor, Josh Shapiro, announced a settlement with a company that experienced a data incident in April 2021 that exposed 30,295 Pennsylvania consumers’ payment card information. Following an investigation jointly conducted by Mr. Shapiro’s office and its counterpart in New York, it was determined that the company “failed to properly employ reasonable data security measures in protecting consumers’ payment card information.” The forensic investigation revealed that in December 2020, an unknown hacker exploited a vulnerability in the company’s web servers that allowed them to steal customers’ payment card information and other personal information.

The company has agreed to pay $100,000 each to both the Pennsylvania and New York Attorneys General Offices. It also agreed to implement several security policies designed to protect consumer personal information including: (i) designating an employee to coordinate and supervise its information security program; (ii) conducting annual security risk assessments of its networks; and (iii) conducting annual employee training.

Businesses are increasingly facing a multitude of data privacy and security frameworks. Healthcare providers, for example, have to consider the Health Insurance Portability and Accountability Act of 1996 (HIPAA) and its privacy and security regulations, as well as more stringent state regulation. Of course, healthcare providers that process credit or debit cards for payment, also will have to consider the applicable provisions of the Payment Card Industry Data Security Standard (“PCI DSS”). A restaurant in New York will very likely need to consider the PCI DSS rules, but also the NY SHIELD Act when it considers what safeguards it needs to protect personal information. An insurance broker or agent also may have several frameworks to evaluate, such as HIPAA, if it is a business associate, and depending on the state of operation, that state’s version of the NAIC Data Security Model Law (nearly half of the states have adopted a version of this law).

In connection with his office’s announcement, AG Shapiro stated,

“Every corporation that does business in Pennsylvania needs to stay alert and protect their customer’s personal data or they will have to answer to my office in court.”

Pennsylvania Senior Deputy Attorney General Tim Murphy shared the announcement of the settlement on LinkedIn, noting,

Here is another data breach settlement following a joint investigation with my friends from the NY Office of Attorney General. My colleagues in the cyber community (especially insurers) should take note that some AG offices are going to keep whacking companies who lack basic components of an information security program.

Data breaches are difficult, if not impossible, to prevent in all cases, even when significant efforts are made to prevent them. When a breach happens, organizations should be prepared to respond, but also be positioned to avoid getting “whacked” should federal and/or state agency investigations follow. They can bolster their position in those cases with a strong compliance program that includes, among several other things, becoming more aware of their compliance requirements; conducting risk assessments; shaping their policies and procedures based on those assessments; documenting their processes, policies, and procedures; and training employees.

Happy New Year and good luck in 2023!

On December 22, 2022, the Nevada Gaming Commission (NGC) adopted regulations creating new cybersecurity requirements for certain gaming operators. This action joins agencies in other jurisdictions moving quickly to protect consumers and their personal information in the gaming industry. The NGC adopted the October 17, 2022 version of the regulations, which become effective January 1, 2023.

Below is a summary of the new rules:

In general.

  • Gaming operators must take “all appropriate steps to secure and protect their information systems from the ongoing threat of cyber attacks,” including satisfying the requirements of chapter 603A of Nevada Revised Statutes (NRS).
  • The obligations apply to the operators’ own information, as well as the “personal information” of their patrons and employees as defined in NRS 603A.040.
  • In general, the rules apply to certain covered entities – those that hold:
    • a nonrestricted license as defined in NRS 463.0177 who deal, operate, carry on, conduct, maintain, or expose for play any game defined in NRS 463.0152 (e.g., games played with cards, dice, equipment, or any mechanical or electronic device or machine, such as monte, roulette, keno, bingo, blackjack, poker, baccarat, or a slot machine); or
    • a gaming license that allows for the operation of a race book or a sports pool, or permits the operation of interactive gaming.
  • Covered entities must document in writing all procedures taken to comply with this section and the results thereof, and must maintain all such records for a minimum of five years from the date they are created. Such records must be provided to the Nevada Gaming Control Board (Board) upon request. 

Risk assessment and adoption of cybersecurity best practices.

  • Covered entities must conduct an initial risk assessment and develop the cybersecurity best practices they deem appropriate. Examples of such best practices include, without limitation, CIS Version 8, COBIT 5, ISO/IEC 27001, and NIST SP 800-53.
  • After the initial risk assessment, covered entities must continue to monitor cybersecurity risks to their business and make appropriate modifications.  
  • For the initial assessment and ongoing monitoring, covered entities may use affiliated entities or third parties with appropriate expertise in cybersecurity.
  • Covered entities have until December 31, 2023, to fully comply with these assessment and best practice requirements.

Incident response.

  • Covered entities must provide written notice to the Board as soon as practicable, but no later than 72 hours, after becoming aware of a cyber attack on the covered entity’s information system resulting in a material loss of control, compromise, unauthorized disclosure of data or information, or any other similar occurrence.
  • A “cyber attack” means any act or attempt to gain unauthorized access to an information system for the purpose of disrupting, disabling, destroying, or controlling the system or destroying or gaining access to the information contained therein. Notably, under these regulations, a cyber attack is not limited to an incident resulting in unauthorized access to or acquisition of personal information.
  • Covered entities must investigate the cyber attack (or engage a third party to do so), prepare a report documenting the results of the investigation, inform the Board the report is completed, and provide a copy to the Board upon request. Reports must include, without limit, the root cause of the cyber attack, the extent of the cyber attack, and any actions taken or planned to be taken to prevent similar events in the future. Many such investigations are performed at the direction of counsel and designed to be privileged. Covered entities need to think carefully about how they structure their investigations and related activities.
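The 72-hour notice window in the first bullet above is mechanical enough to track in code. A minimal sketch (the awareness timestamp below is hypothetical):

```python
from datetime import datetime, timedelta, timezone

# The regulation requires written notice to the Board "as soon as practicable
# but no later than 72 hours" after becoming aware of a qualifying cyber attack.
NOTICE_WINDOW = timedelta(hours=72)

def board_notice_deadline(became_aware: datetime) -> datetime:
    """Outside deadline for the written notice; notice should go out sooner if practicable."""
    return became_aware + NOTICE_WINDOW

# Hypothetical moment the covered entity became aware of the attack.
aware = datetime(2023, 3, 1, 14, 30, tzinfo=timezone.utc)
print(board_notice_deadline(aware).isoformat())  # 2023-03-04T14:30:00+00:00
```

The deadline is only the backstop; the regulation's primary standard remains "as soon as practicable."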

Additional requirements for Group I licensees under subsection 8 of regulation 6.010.

  • Designate a qualified individual to be responsible for developing, implementing, overseeing, and enforcing the covered entity’s cybersecurity best practices and procedures described above.
  • Perform, at least annually, observations, examinations, and inquiries of employees to verify compliance with cybersecurity best practices. The annual review may be performed by internal auditors or independent third parties with expertise in cybersecurity. Documents prepared by the internal auditor must be retained as described above.
  • Engage an independent accountant or other independent entity with cybersecurity expertise at least annually to (i) perform an independent review of the covered entity’s best practices and procedures and (ii) attest in writing that those practices and procedures comply with the requirements of Section 5.260 Cybersecurity of the NGC’s Regulations. The covered entity must retain the written attestation and any related documents as described above.

Gaming is not the only industry seeing a strengthening of regulations concerning privacy and cybersecurity. A few years ago, for example, we discussed an uptick in state regulation of the insurance industry with several states adopting the NAIC’s Model Security Law. Today there are over 20 states that have adopted the NAIC model law. Finance, healthcare, professional services, etc. all are seeing an uptick in industry-specific regulation, which shows no sign of slowing.

As the year comes to a close, here are some of the highlights from the Workplace Privacy, Data Management & Security Report with our Top 10 most popular posts of 2022:

1. California Consumer Privacy Act FAQs: Employment Information

As the California Privacy Rights Act moves toward taking effect and exceptions applying to employment-related data expire, employers have questions about handling privacy when it comes to employee information.

2. “Get a Life” – Another Dentist Responds to Patient’s Online Review, This Time Faces a $50,000 OCR Penalty

The Office for Civil Rights (OCR) recently announced four enforcement actions, one against a small dental practice that imposed a $50,000 civil monetary penalty under HIPAA. The OCR alleged the dentist impermissibly disclosed a patient’s protected health information (PHI) when the dentist responded to a patient’s negative online review. 

3. California Tightens Rules on Vehicle Tracking, Fleet Management

In September 2022, Governor Gavin Newsom signed into law AB-984, which becomes effective January 1, 2023. The law builds on other privacy protections in California, such as the California Consumer Privacy Act and Penal Code Sec. 637.7. Section 637.7 prohibits using an electronic tracking device to determine the location or movement of a person; however, it does not apply when the vehicle owner (e.g., the employer) has consented to the use of the device.

4. Does Your Cyber Insurance Policy Look More Like Health Insurance?

Many factors are driving up the cost of cyber insurance policies including increases in ransomware attacks and the cost of business interruption from those attacks. Moreover, carriers are giving more scrutiny to the practices and procedures of the companies they insure. As such, companies need to consider their cyber security controls to assist in obtaining and maintaining coverage.

5. $600,000 Reasons To Review Your SHIELD Act Compliance Program: NY Attorney General Announces Significant Settlement Stemming From Email Data Breach

On January 24, 2022, New York Attorney General Letitia James announced a $600,000 settlement agreement with EyeMed Vision Care, a vision benefits company, stemming from a 2020 data breach compromising the personal information of approximately 2.1 million individuals across the United States, including nearly 99,000 in New York State.

6. The RIPTA Data Breach May Provide Valuable Lessons About Data Collection and Retention

There is a basic principle of data protection that when applied across an organization can significantly reduce the impact of a data incident – the minimum necessary principle. A data breach reported late last year by the Rhode Island Public Transit Authority (RIPTA) highlights the importance of this relatively simple but effective tool.

7. From Time Keeping to Dashcams, BIPA Litigation Continues

Litigation under the Illinois Biometric Information Privacy Act (BIPA) continues to heat up, encompassing litigation about timekeeping systems that use fingerprints to dashcams.

8. Utah Becomes Fourth State to Enact A Comprehensive Privacy Law

Utah joined California, Colorado, and Virginia in passing a consumer privacy statute; the Utah Consumer Privacy Act takes effect on December 31, 2023.

9. Does a Poor ESG, Social Responsibility Rating Increase an Organization’s Cyber Risk?

With ransomware and other cyber threats top of mind for most in the c-suite these days, a question frequently raised is whether a particular organization is a target for hackers. Of course, nowadays, any organization is at risk of an attack, but the question is whether some organizations are targeted more than others. An Insurance Journal article discusses a paper published in September 2021 that identifies a factor that could elevate the risk of being targeted, a factor many in cyber might not have expected, “greenwashing.”

10. Connecticut Likely to Become Fifth State to Enact Comprehensive Consumer Privacy Law

Connecticut prepared and eventually passed the Act Concerning Personal Data Privacy and Online Monitoring, which will take effect July 1, 2023.

Jackson Lewis will continue to track information related to privacy regulations and related issues. For additional information on these topics, please reach out to a member of our Privacy, Data, and Cybersecurity practice group.

In a recent opinion, Henderson v. The Source for Public Data, L.P., et al., the U.S. Court of Appeals for the 4th Circuit considered whether Section 230(c)(1) of the Communications Decency Act (CDA), a federal law that allows social media websites to provide a forum for users to post videos or other information without holding the website owner responsible for the content of the uploaded material, likewise shields online aggregators of public records from liability as a consumer reporting agency under the Fair Credit Reporting Act (FCRA). Disagreeing with the district court, the Court of Appeals held that Section 230 did not apply because the online aggregator was an "information content provider that provided the improper information" and was not merely providing a forum for its users to upload information.

In Henderson, defendants were in the business of gathering publicly available information including criminal and civil records, voting records, driving information, and professional licensing, aggregating the information, and selling it to third parties. Defendants acknowledged the data they sold was used to determine an individual’s creditworthiness and perform background checks for employment purposes. The plaintiffs, job seekers who had background checks done on them by the online aggregators, filed claims under the FCRA, asserting that the online aggregators were producing “consumer reports” but not complying with the technical provisions of the FCRA, such as providing the plaintiffs with copies of their “consumer reports” upon request.

At the district court level, the defendants sought dismissal of the claims, arguing that they were protected by Section 230 of the CDA. The district court agreed and granted the defendants' dispositive motion.

On appeal, the 4th Circuit held that the activities of the online aggregators did not fall within the scope of protection provided by Section 230. The panel held that the defendants contributed in a material way to what made the online content inaccurate. The panel opinion stated that the defendants made substantive changes to the records' content that materially contributed to the records' unlawfulness, making the defendants content providers for the information and, therefore, not entitled to protection under Section 230.

This opinion will likely have an impact on whether FCRA defendants can rely on Section 230, in whole or in part, as a source of immunity from FCRA claims. Moreover, this ruling will influence the ongoing CDA reform debate, as legislators who already have reservations about the scope of CDA protection may look askance at the Henderson ruling and seek to add the FCRA as a statutory exemption to the CDA in a future reform bill. Either way, this is an area that is developing and worth watching closely.

If you have questions about FCRA compliance or related issues, contact the authors of this article or the Jackson Lewis attorney with whom you regularly work.

On December 16, 2022, the California Privacy Protection Agency (CPPA) held its final meeting before the California Privacy Rights Act (CPRA), which amended the California Consumer Privacy Act, takes effect on January 1, 2023. Despite the CPRA taking effect at the start of the year, the CPPA, the agency charged with implementing the law, has not finalized its rulemaking process. At the Friday meeting, the agency indicated that the final proposed rules are anticipated to be released at the end of January and, after going through the various administrative requirements, will take effect in April. In the meantime, regulations previously promulgated by the California Attorney General's Office will remain in effect.

Though it has not finalized its CPRA rulemaking, the CPPA is setting its sights on other rulemaking duties, including the use of artificial intelligence in data collection and businesses' cybersecurity assessments. The CPPA released sample questions covering these areas, which will be finalized and approved in the new year and then released for a comment period to collect insights on the framework needed for risk assessments and automated decision-making.

The sample questions on risk assessments address, among other things, which laws and other requirements businesses already must comply with when processing consumers' personal information that mandate risk assessments, and how those assessments can be aligned with the requirements under the CPRA. Further, the CPPA is considering whether assessments conducted under other privacy statutes and regulations, such as the European General Data Protection Regulation and Colorado's Privacy Act, can be used for CPRA purposes.

Similarly, in considering rulemaking regarding automated decision-making, the CPPA is examining questions about other laws that require access and/or opt-out rights in the context of automated decision-making. The sample questions also seek information about how prevalent algorithmic discrimination based on classifications/classes under California and federal law is, and whether it is more pronounced in some sectors.

Jackson Lewis will continue to track information related to privacy regulations and related issues. For additional information on the CPRA, please reach out to a member of our Privacy, Data, and Cybersecurity practice group.

On January 1, 2023, Virginia’s Consumer Data Protection Act (CDPA) takes effect. Key features of the CDPA include expansive consumer privacy rights (right to access, right of rectification, right to delete, right to opt out, right of portability, right against automated decision-making), a broad definition of “personal data”, the inclusion of a “sensitive data” category, and data protection assessment obligations for data controllers.

However, the CDPA is not the only privacy and data protection legislation in the Commonwealth. The following are some of the other laws to consider when working on privacy and data protection policies in the state.

Personal Information Privacy Act

This law, which predates the CDPA, restricts the sale of customers’ personal information by merchants, as well as the use of social security numbers. For example, with regard to the limitations on the use of social security numbers, a person shall not:

1. Intentionally communicate another individual’s social security number to the general public;

2. Print an individual’s social security number on any card required for the individual to access or receive products or services provided by the person;

3. Require an individual to use his social security number to access an Internet website, unless a password, unique personal identification number, or other authentication device is also required to access the site; or

4. Send or cause to be sent or delivered any letter, envelope, or package that displays a social security number on the face of the mailing envelope or package, or from which a social security number is visible, whether on the outside or inside of the mailing envelope or package.

Insurance Data Security Act

Effective July 1, 2020, Virginia adopted legislation establishing data security requirements applicable to persons licensed under the insurance laws of the Commonwealth. Following several other state laws that have created data security regimes applicable to the insurance industry, the law requires licensees to maintain the security of information systems and nonpublic information. The law also requires licensees to investigate cybersecurity events and to notify affected individuals and the Commissioner of Insurance. More recently, regulations were approved, effective June 1, 2021. Those regulations provide (i) rules for reporting cybersecurity events; (ii) risk assessment requirements that must be implemented by July 1, 2022; and (iii) additional security measures that must be implemented by July 1, 2022.

Data Breach Notification Law

Since July 2008, Virginia law has required entities doing business in Virginia and state agencies to notify individuals of a breach of their computerized, unredacted, and unencrypted personal information. Under the law, notice is required only if the breach causes, or is reasonably believed to have caused or to be likely to cause, identity theft or other fraud to a resident of the Commonwealth.

Similar to the data breach notification laws in other states, such as Massachusetts and New Hampshire, the notification must be provided to the Virginia Attorney General, as well as the affected residents. Also, if more than 1,000 persons would have to be notified at one time, the business would have to notify the Virginia Attorney General and all consumer reporting agencies of the timing, distribution, and content of the notice. Violations of this statute are enforced by the Attorney General, who may seek up to $150,000 in penalties per breach. Individuals also may recover direct economic damages from a violation.

If you have questions about developing a privacy and data compliance plan for Virginia law or related issues, please reach out to a member of our Privacy, Data, and Cybersecurity practice group.