Governor Kathy Hochul signed several bills last month designed to strengthen protections for consumers’ personal data. One of those bills (S2659B) makes important changes to the notification timing requirements under the Empire State’s breach notification law, Section 899-aa of the New York General Business Law. The bill took effect immediately upon signing, on December 21, 2024.

All fifty states have enacted at least one data breach notification law. Some states, such as California, have more than one statute – a generally applicable statute and one applying to certain health care entities. Over the years, many of these states have updated their laws in different respects. For example, some have expanded the definition of personal information, resulting in broader categories of personal information triggering a potential notification requirement if breached. Others have added requirements to notify one or more state agencies. Still others have modified the specific notification requirements, such as the timing of notification. That is one of the changes New York made to its law.

Prior to the change, a business subject to the New York statute that experienced a covered breach would be required to provide notification to affected individuals:

in the most expedient time possible and without unreasonable delay.

There was no outside time frame by which the notice had to be provided. The bill added a 30-day deadline. Now, the law requires the breached entity to provide notification

in the most expedient time possible and without unreasonable delay, provided that such notification shall be made within thirty days after the breach has been discovered

Notably, prior to the change, the law excluded from this timing requirement the legitimate needs of law enforcement and “any measures necessary to determine the scope of the breach and restore the integrity of the systems.” The legitimate needs of law enforcement exception remains in the law; the exceptions for determining the scope of the breach and restoring system integrity do not.

S2659B also made a change to the state agencies that must be notified in the event of a breach under the statute. Under the prior law, if any New York residents were to be notified under the State’s breach notification law, the state attorney general, the department of state, and the division of state police all needed to be notified. The new law adds the Department of Financial Services to the list.

With breach notification requirements under federal law, the laws in all states and several localities, and increasingly embedded in contract obligations, it can be difficult to stay up to date, particularly if the company is in the middle of handling a breach. Beyond being required in some scenarios, this is one more reason we recommend maintaining an incident response plan. Such a plan is a good place to track these kinds of developments for the company’s incident response team.

As the healthcare sector continues to be a top target for cyber criminals, the Office for Civil Rights (OCR) issued proposed updates to the HIPAA Security Rule (scheduled to be published in the Federal Register on January 6). It looks like substantial changes are in store for covered entities, including healthcare providers and health plans, and their business associates alike.

According to the OCR, cyberattacks against the U.S. health care and public health sectors continue to grow and threaten the provision of health care, the payment for health care, and the privacy of patients and others. The OCR reported that in 2023 over 167 million people were affected by large breaches of health information, a 1,002% increase from 2018. Further, 79 percent of the large breaches reported to the OCR in 2023 were caused by hacking. Since 2019, large breaches caused by successful hacking and ransomware attacks have increased 89% and 102%, respectively.

The proposed Security Rule changes are numerous and include the following:

  • All Security Rule policies, procedures, plans, and analyses will need to be in writing.
  • Create and maintain a technology asset inventory and a network map that illustrates the movement of ePHI throughout the regulated entity’s information systems on an ongoing basis, but at least once every 12 months.
  • Greater specificity for risk analysis. For example, risk assessments must be in writing and include action items such as identification of all reasonably anticipated threats to ePHI confidentiality, integrity, and availability and potential vulnerabilities to information systems.
  • 24-hour notice to regulated entities when a workforce member’s access to ePHI or certain information systems is changed or terminated.
  • Stronger incident response procedures, including: (I) written procedures to restore the loss of certain relevant information systems and data within 72 hours, (II) written security incident response plans and procedures, including testing and revising plans.
  • Conduct a compliance audit at least once every 12 months.
  • Business associates must verify Security Rule compliance to covered entities, via verification by a subject matter expert, at least once every 12 months.
  • Require encryption of ePHI at rest and in transit, with limited exceptions.
  • New express requirements would include: (I) deploying anti-malware protection, and (II) removing extraneous software from relevant electronic information systems.
  • Require the use of multi-factor authentication, with limited exceptions.
  • Require review and testing of the effectiveness of certain security measures at least once every 12 months.
  • Business associates to notify covered entities upon activation of their contingency plans without unreasonable delay, but no later than 24 hours after activation.
  • Group health plans must include in plan documents certain requirements for plan sponsors: comply with the Security Rule; ensure that any agent to whom they provide ePHI agrees to implement the administrative, physical, and technical safeguards of the Security Rule; and notify their group health plans upon activation of their contingency plans without unreasonable delay, but no later than 24 hours after activation.
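As an illustrative (not regulatory) sketch of the multi-factor authentication item in the list above, the following is a minimal time-based one-time password (TOTP, RFC 6238) generator using only the Python standard library. A real deployment should rely on a vetted authentication product or library rather than hand-rolled code; this sketch simply shows the mechanics behind the codes an authenticator app produces.

```python
import base64
import hashlib
import hmac
import struct
import time

def totp(secret_b32, for_time=None, digits=6, step=30):
    """Compute an RFC 6238 time-based one-time password (SHA-1 variant)."""
    key = base64.b32decode(secret_b32, casefold=True)
    # The moving factor is the number of `step`-second intervals since the epoch.
    counter = int((time.time() if for_time is None else for_time) // step)
    digest = hmac.new(key, struct.pack(">Q", counter), hashlib.sha1).digest()
    # Dynamic truncation (RFC 4226): read 4 bytes at an offset taken from the digest.
    offset = digest[-1] & 0x0F
    code = struct.unpack(">I", digest[offset:offset + 4])[0] & 0x7FFFFFFF
    return str(code % 10 ** digits).zfill(digits)

# RFC 6238 test vector: ASCII secret "12345678901234567890", time = 59 seconds.
print(totp("GEZDGNBVGY3TQOJQGEZDGNBVGY3TQOJQ", for_time=59))  # -> 287082
```

The server and the user’s device share the secret and each compute the same short-lived code, which is why possession of the device becomes a second authentication factor.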

After reviewing the proposed changes, concerned stakeholders may submit comments to OCR for consideration within 60 days after January 6, by following the instructions outlined in the proposed rule. We support clients in developing and submitting comments they wish to communicate to help shape the final rule, as well as in complying with the requirements once the rule is made final.

As the year comes to a close, here are some of the highlights from the Workplace Privacy, Data Management & Security Report, with our most popular topics and posts from 2024.

Expanding State Privacy Laws

This year saw a further expansion of state comprehensive consumer data privacy laws. These legislative measures aim to enhance the protection of consumer data, ensuring greater transparency and accountability for businesses that collect and process personal information. Several states introduced robust frameworks designed to safeguard consumer privacy. Whether you are an attorney, an executive, or a leader in human resources, marketing, operations, risk management, and of course IT, it is vital to stay informed about these evolving legal standards and their implications for both businesses and consumers.

Read more on these developments:

Bluegrass State Becomes Third State to Pass a Comprehensive Consumer Privacy Data Law in 2024

Maryland Passes Comprehensive Data Privacy Law, Joining the Swelling State Ranks

Minnesota Passes a Comprehensive Consumer Data Privacy Law

Nebraska Adds to the List of States That Have Enacted a Comprehensive Consumer Data Privacy Law

New Hampshire Passes Comprehensive Consumer Data Privacy Law

New Jersey Legislature Enacts the First Consumer Privacy Law of 2024

Rhode Island Passes a Comprehensive Consumer Data Privacy Law

Growing AI Regulation

In 2024, the landscape of artificial intelligence (AI) regulation experienced significant changes, reflecting the rapid advancements and widespread adoption of AI technologies across various industries. Regulators have increasingly focused on addressing the ethical, legal, and privacy implications of AI, leading to new laws and amendments aimed at safeguarding individuals’ rights and ensuring transparency in AI deployment. One example at the federal level is the use of AI when conducting background checks and potential Fair Credit Reporting Act (FCRA) implications. A notable example at the state level is Illinois which made significant amendments to its Human Rights Act, setting a precedent for other states by incorporating specific provisions related to AI.

Read more about these developments:

AI Regulation Continues to Grow as Illinois Amends its Human Rights Act

AI Notetakers – Evaluating the Risks Along with the Benefits

3 Key Risks When Using AI for Performance Management and Ways to Mitigate Them

AI and Other Decision-Making Tools: Does the Fair Credit Reporting Act Apply?

Data Breach Risks Escalate

Businesses faced significant regulatory and legislative developments pertaining to data breaches in 2024, reflecting the growing need to protect sensitive information in an increasingly digital world. Key updates include the strengthening of breach notification requirements by multiple states, such as Utah, and the emphasis on multi-factor authentication to prevent unauthorized access. The rising scrutiny and evolving legal landscape underscore the necessity for businesses to implement robust cybersecurity measures and comply with updated data breach notification laws to mitigate risks and avoid severe penalties.

Read more about these developments:

Utah Updates to Breach Notification Requirements Take Effect

Multi-factor Authentication (MFA) Bypassed to Permit Data Breach

Website Tracking Concerns for Business

In 2024, the scrutiny surrounding website tracking technologies has intensified significantly. It has become critical for businesses to understand the evolving legal landscape of online tracking practices. Increased regulatory pressure and new legislative measures across different states have highlighted the need for businesses to implement robust privacy policies. These policies must comply not only with state-specific regulations but also with broader federal guidelines, ensuring the protection of consumer data and transparency in data collection. Moreover, recent guidance from the New York Attorney General and other regulatory bodies has emphasized that non-compliance can lead to severe penalties, making it imperative for online retailers and all businesses employing website tracking technologies to stay abreast of the latest legal requirements and best practices.

Read more about these developments:

California Invasion of Privacy Act Violations Aimed at Online Retailers

The Spotlight Shines Even Brighter: New York Attorney General Publishes Guidance On Businesses’ Use Of Website Tracking Technologies

Litigation Under Wiretap Law and What Website Owners Need to Know

Administrative Guidance on Cybersecurity

This year several administrative agencies issued guidance on cybersecurity, emphasizing the critical importance of protecting sensitive data and ensuring robust security measures across various sectors. The Department of Labor (DOL) expanded fiduciary obligations to include cybersecurity for health and welfare plans, reflecting a growing recognition of the vulnerabilities and risks associated with inadequate cybersecurity practices. When plan fiduciaries set out to assess their plan service providers, they might consider amendments the Securities and Exchange Commission (SEC) made in 2024 to Regulation S-P, which regulates many of those same service providers. If a service provider is subject to Regulation S-P, confirming that it complies with the SEC requirements for an incident response plan and other cybersecurity policies and procedures would help the fiduciaries satisfy their obligation to make prudent selections.

Read more about these developments:

DOL Expands Fiduciary Obligations for Cybersecurity to Health and Welfare Plans

Why Retirement Plan Sponsors and Fiduciaries Need to Know about the SEC Cybersecurity Amendments

The Broadening Data Security Mandate: SEC Incident Response Plan and Data Breach Notification Requirements

Jackson Lewis will continue to track important developments in privacy, data management, and cybersecurity in the new year. If you have questions about these or other related issues, contact a Jackson Lewis attorney to discuss.

Around the country, the weather is turning wintery, but in the privacy arena, there will be a blizzard as five state comprehensive privacy laws become effective.

Here is an overview for businesses needing to prepare.

1. Delaware Personal Data Privacy Act (DPDPA)

The DPDPA takes effect on January 1, 2025. It applies to entities doing business in Delaware or targeting Delaware residents. It covers businesses that process the personal data of at least 35,000 consumers or derive significant revenue from selling personal data. Notably, nonprofits are not exempt, and the law includes stringent requirements for handling sensitive personal information.

2. Iowa Consumer Data Protection Act (ICDPA)

The ICDPA also takes effect on January 1, 2025.  It is more business-friendly, with a high threshold for applicability. It targets businesses that control or process data of at least 100,000 Iowan consumers or derive over 50% of their revenue from selling personal data. The ICDPA offers a generous 90-day cure period for violations.

3. Nebraska Data Privacy Act (NDPA)

The NDPA takes effect on January 1, 2025. The NDPA applies broadly to entities conducting business in Nebraska, with few exemptions. Small businesses are exempt from most provisions but must obtain opt-in consent before selling sensitive information. The law includes a 30-day cure period for violations.

4. New Hampshire Data Privacy Act (NHDPA)

The NHDPA takes effect on January 1, 2025. New Hampshire’s NHDPA focuses on consumer rights and data protection, requiring businesses to implement robust data security measures and provide clear privacy notices. It also grants consumers the right to access, correct, and delete their personal data.

5. New Jersey Data Privacy Act (NJDPA)

The NJDPA takes effect on January 15, 2025. The NJDPA introduces comprehensive data protection requirements, including mandatory data protection assessments and the obligation to recognize universal opt-out mechanisms. It aims to enhance transparency and consumer control over personal data.

How to Prepare for the Blizzard

With these new laws, businesses must take proactive steps to ensure compliance. Here are some key actions to consider:

  • Assess Application of the Law: Determine whether each law applies to your business.
  • Conduct Data Audits: Identify and categorize the personal data you process to understand your obligations under each law.
  • Update Privacy Policies: Ensure your privacy policies are transparent and reflect the new legal requirements.
  • Implement Data Security Measures: Strengthen your data protection practices to safeguard consumer information.
  • Service Provider Agreements: Review and update as necessary service provider agreements with those vendors that process personal information on behalf of the business.
  • Consumer Rights Readiness: Be prepared to comply with requests from consumers concerning their privacy rights, such as rights to opt-out of sale or deletion of personal information.
  • Train Employees: Educate your staff about the new laws and their roles in maintaining compliance.

If you have questions about compliance with the laws taking effect in January or related issues please reach out to a member of our Privacy, Data, and Cybersecurity practice group to discuss.

The Consumer Financial Protection Bureau (CFPB) recently issued guidances titled Consumer Financial Protection Circular 2024-06: Background Dossiers and Algorithmic Scores for Hiring, Promotion, and Other Employment Decisions and Consumer Financial Protection Circular 2023-03: Adverse action notification requirements and the proper use of the CFPB’s sample forms provided in Regulation B. The guidances remind employers that certain decision-making tools and resources, including tracking devices and AI algorithmic services used for employment purposes, could create Fair Credit Reporting Act (FCRA) compliance issues.

Typically, FCRA applies when employers request a consumer report from a third-party consumer reporting agency (CRA) for an employment purpose. Actions considered an employment purpose have been broadly construed and can include hiring, promotions, disciplinary action, reassignment, and retention. Consumer reporting agencies are the companies that assemble or evaluate outside or public consumer data for a fee to furnish such reports.

FCRA may cover a broad spectrum of reports produced by a consumer reporting agency. An employer now needs to understand whether FCRA applies, given a variety of considerations including the origination of the data for the decision-making tool.

First, an easy example – an employer may decide to conduct a lawful credit check on a job candidate. This action typically implicates FCRA because credit reports can only be obtained from a CRA.

On the other end of the spectrum, an employer may use third-party software to prepare reports summarizing employees’ sales performance. The third-party software merely summarizes internal company sales data into report form so the employer can more easily evaluate employee performance. In this case, the vendor is not a consumer reporting agency. While the employer is using the report for an employment purpose, the FCRA explicitly excludes “report[s] containing information solely as to transactions or experiences between the consumer and the person making the report.” Sales data solely from the employer likely fits under this exclusion.

Software that utilizes generative AI makes the question of FCRA’s applicability murkier. For example, some employers are turning to software vendors that utilize AI algorithms to generate employee performance reviews. Without careful consideration, employers may unknowingly implicate FCRA through their purchase and use of such software. Some of these considerations include:

  • Data Sources: Employers must consider all the sources of data used to train the vendor’s AI tool, even if the data used to generate each individualized report only originates from the respective employee (or some or all of the employer’s other employees). For instance, if the AI tool generates a sales performance report based on the employee’s sales in comparison to or in consideration of public consumer data or public information about the employee, the vendor could be considered a consumer reporting agency.
  • Vendor’s Intent: It is also important to consider whether the employer alters the performance review prior to delivery to the employee. A vendor is not a consumer reporting agency if it does not intend for its reports to be used as “consumer reports.” Kidd v. Thomson Reuters Corp., 925 F.3d 99 (2d Cir. 2019). As such, tools that merely assist with report drafting, and only present draft reports intended for the employer to supplement and edit, may not fit under this broad definition of consumer reporting agency.
  • Other Considerations: Utilizing software that incorporates AI may also implicate various state and federal laws. For instance, Colorado recently passed the AI Act, which creates a duty of reasonable care for deployers of “high-risk” AI systems to avoid algorithmic discrimination against Colorado residents. Colorado Enacts Artificial Intelligence Legislation Affecting AI Systems Developers, Deployers – Jackson Lewis

The CFPB’s guidances should act as a reminder to employers that FCRA coverage is just one more area to explore before purchasing a decision-making tool from a vendor or using AI for any employment-related purpose. If you have any questions, please speak with the JL attorney you regularly work with.

A healthcare provider delivering pain management services in Florida and other states faces a $1.19 million civil monetary penalty from the U.S. Department of Health and Human Services (HHS), Office for Civil Rights (OCR). The OCR investigation stems from a data breach, but not the type of breach we are used to seeing in the news – it was not a ransomware attack, business email compromise, or some other type of attack by an unknown hacker. Similar to many other OCR enforcement actions, however, a lack of basic safeguards under the Security Rule drove the penalty.

According to the OCR:

  • On May 3, 2018, the covered entity retained an independent contractor to provide business consulting services.
  • The contractor’s services ceased in August of 2018.
  • On February 20, 2019, the covered entity discovered that on three occasions, between September 7, 2018, and February 3, 2019, the contractor impermissibly accessed the provider’s electronic medical record (EMR) system and accessed the electronic protected health information (ePHI) of approximately 34,310 individuals. The contractor used that information to generate approximately 6,500 false Medicare claims.
  • On February 21, 2019, the covered entity terminated the independent contractor’s access to its systems, and in early April of that same year filed a breach report with OCR. The report described that the compromised PHI included names, addresses, phone numbers, email addresses, dates of birth, Social Security numbers, chart numbers, insurance information, and primary care information.

According to the OCR, the contractor continued to have access to the covered entity’s information systems for six months after its services ended.

“Current and former workforce can present threats to health care privacy and security—risking continuity of care and trust in our health care system,” said OCR Director Melanie Fontes Rainer. “Effective cybersecurity and compliance with the HIPAA Security Rule means being proactive in reviewing who has access to health information and responding quickly to suspected security incidents.” 

The OCR commenced an investigation and reported findings that the covered entity:

  • did not conduct a thorough and accurate risk analysis prior to the breach incident, or until September 30, 2022, more than three years after the incident,
  • had not implemented policies and procedures to regularly review records of information system activity containing ePHI,
  • did not implement termination procedures designed to remove access to ePHI for workforce members who had separated, and
  • did not implement policies and procedures addressing access to workstations.

It is worth noting that the $1.19 million penalty comes after a reduction for “Recognized Security Practices.” Recall that following an amendment enacted in 2022, the HITECH Act now requires the OCR to take into account Recognized Security Practices in connection with certain enforcement and audit activities under the HIPAA Security Rule. In short, if a covered entity can demonstrate Recognized Security Practices as being in place continuously for the 12 months prior to a security incident, a reduction in the amount of the civil monetary penalty may be warranted.

In this case, OCR provided the covered entity an opportunity to adequately demonstrate that it had RSPs in place. The covered entity did so, and OCR applied a reduction to the penalty.

Regulated entities, including healthcare providers, often cite “controls” they have in place, believing they are sufficient to address their compliance obligations. This application of the rule for Recognized Security Practices is a good example of why that is not the case. That is, while it is important to maintain good controls, those efforts still need to be measured against the applicable compliance requirements, such as those set forth under the HIPAA Security Rule.

No organization can eliminate data breach risks altogether, regardless of industry or size, even if it has taken significant steps to safeguard its systems and train employees to avoid phishing attacks. Perhaps the most significant reason these risks remain: third-party service providers or vendors.

For most businesses, particularly small to medium-sized businesses, service providers play a critical role helping to manage and grow their customers’ businesses.

Consider vacation rental and property management businesses. Whether operating an active website, maintaining online reservation and property management platforms, or recruiting and managing a growing workforce, these businesses wind up collecting, processing, and storing large amounts of personal information.

With a national occupancy rate of approximately 56%, a vacation rental company with 100 units for weekly rental might expect to collect personal information from about 5,000 individuals annually (25 weeks rented × 2 persons per rental × 100 properties). My crude math leaves out website visitors, cancellations, employees (and their family members), and other factors. After three years in business, the company might easily be storing personal information of 15,000 to 20,000 individuals in its systems.
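The back-of-the-envelope estimate above can be expressed as a short calculation. The figures mirror the post’s admittedly crude assumptions and are purely illustrative:

```python
# Rough record-count estimate for a hypothetical vacation rental company,
# mirroring the crude assumptions in the text (illustrative numbers only).
properties = 100        # rental units offered for weekly rental
weeks_rented = 25       # weeks per unit per year under the occupancy estimate
guests_per_rental = 2   # persons whose information is collected per stay
years_in_business = 3

records_per_year = properties * weeks_rented * guests_per_rental
print(records_per_year)                      # 5000 individuals per year
print(records_per_year * years_in_business)  # 15000 over three years
```

Adding website visitors, cancellations, and employee records pushes the total toward the upper end of the 15,000–20,000 range.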

There are many good online resources to help protect VR businesses from online scams, including those that could lead to a data breach. “Vacation Rental Scams: 20 Red Flags for Spotting Hoax Guests” and “How to Protect Your Vacation Rental from Phishing Attacks” by Lodgify are good examples.

But what happens when the VR business’ guest and/or employee data is breached while in the possession of a vendor?

Last year, as reported on the Maine Attorney General’s Office website, Resort Data Processing (RDP) experienced a data breach affecting over 60,000 individuals caused by a “SQL injection vulnerability which allowed an unauthorized third party to redirect payment card information from in-process transactions on our clients’ on-premises Internet Reservation Module (“IRM”) server.” Affected individuals likely included consumers who stayed at properties owned by RDP’s business customers. At least, that is what one plaintiffs’ law firm advertised about the incident.
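As a hedged illustration of the vulnerability class named in that breach notice (a toy example, not RDP’s actual code), here is how a SQL injection payload slips through a string-built query while a parameterized query neutralizes it:

```python
import sqlite3

# Toy reservations table standing in for a booking system's database.
conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE reservations (guest TEXT, card_last4 TEXT)")
conn.execute("INSERT INTO reservations VALUES ('Alice', '4242')")

payload = "nobody' OR '1'='1"  # a classic injection input

# UNSAFE: string interpolation lets the payload rewrite the WHERE clause,
# so the query matches (and leaks) every row.
unsafe_sql = f"SELECT card_last4 FROM reservations WHERE guest = '{payload}'"
leaked = conn.execute(unsafe_sql).fetchall()
print(leaked)  # [('4242',)] -- card data exposed despite the bogus guest name

# SAFE: a parameterized query treats the payload as a literal string value.
safe = conn.execute(
    "SELECT card_last4 FROM reservations WHERE guest = ?", (payload,)
).fetchall()
print(safe)    # [] -- no guest is literally named "nobody' OR '1'='1"
```

Asking a vendor whether its products use parameterized queries (and are tested for injection flaws) is exactly the kind of pointed procurement question discussed below.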

Addressing this risk can be daunting, especially for small businesses that may feel as though they have insufficient bargaining power to influence contract terms with their vendors. But there are several strategies these organizations might consider to strengthen their position and minimize compliance and litigation risks.

  • Identify all third parties that collect, access, or maintain personal information on behalf of the business.
  • Investigate what personal information they access, collect, and maintain and assess how to minimize that information.
  • Make cybersecurity a part of the procurement process. Don’t be afraid to ask pointed questions and seek evidence of the vendor’s privacy and cybersecurity policies and procedures. This should be part of the value proposition the vendor brings to the table.
  • Review service agreements to see what changes might be possible to protect the company.
  • A vendor still may have a breach, so plan for it. Remember, the affected data may be owned by the business and not the vendor, making the business responsible for notification and related obligations. The business may be able to assign those obligations to the vendor, but it likely will be the business’ responsibility to ensure the incident response steps taken by the vendor are compliant.

Experienced and effective counsel can be instrumental here, both with negotiating stronger terms in service agreements and improving preparedness in the event of a data breach.

Massachusetts’ highest court recently issued an opinion that delves into the complex intersection of privacy law and modern technology. The case centers around whether the collection and transmission of users’ web browsing activities to third parties without their consent constitutes a violation of the Massachusetts Wiretap Act.

However, the claim is not unique to Massachusetts. In recent years, plaintiffs in California, Pennsylvania, and Florida have filed claims under state-specific statutes and the Federal Wiretap Act alleging violations when data is collected and shared without consent of the individual website visitor. 

The Massachusetts Court’s analysis hinged on the definitions of “communication” and “interception” under the state wiretap act. The term “communication” presented a particular challenge, as the Court found it ambiguous in the context of web browsing activities. The Court ultimately concluded that web browsing activities do not clearly fall under the statutory definition of “communication.”

In examining the legislative history of the wiretap act, the Court noted that it was primarily concerned with the secret interception of person-to-person conversations and messaging, rather than interactions with a website. This historical perspective further supported the Court’s decision to rule in favor of the website owners.

As a result, the Court reversed the lower court’s denial of defendants’ motions to dismiss the case, concluding that the alleged conduct did not fall under the wiretap act’s purview. While some states have begun to dismiss claims under wiretap laws similar to Massachusetts’, this is likely not the end of attempts to bring claims for website tracking. This is especially true in states where attempts to dismiss claims under wiretap laws have been regularly denied. Similarly, as other privacy laws proliferate, plaintiffs’ counsel will have new grounds on which to attempt claims.

Notwithstanding the Massachusetts Court’s ruling, website owners should take steps to avoid potential risks of privacy claims related to the use of tracking technology. First, they should assess and understand the tracking or monitoring technologies in use on their website.  Once the applicable technologies are understood, website owners should consider ways to ensure transparency – clearly informing users about the types of tracking technologies being used and their purposes. This may be achieved in a number of ways, including through a comprehensive privacy policy, website banner, and/or cookie notice. Further, website owners should analyze and, as applicable, implement a means to obtain consent from users before deploying tracking or monitoring technologies.

Implementing robust data security measures to protect the collected data from unauthorized access or breaches is also essential. Regularly reviewing and updating privacy practices to comply with evolving regulations, such as the General Data Protection Regulation (GDPR) and the California Consumer Privacy Act (CCPA), and the myriad of other state consumer data privacy laws, can further safeguard against potential claims.

On November 8, 2024, the California Privacy Protection Agency (CPPA) voted to proceed with formal rulemaking regarding artificial intelligence (AI) and cybersecurity audits. This comes on the heels of the California Civil Rights Department moving forward with its own regulations about AI.

The current version of the proposed regulations covers several areas:

  1. Automated Decision-Making Technology (ADMT):

The current draft regulations propose establishing consumers’ rights to access and opt out of businesses’ use of ADMT.

They also require businesses to disclose their use of ADMT and provide meaningful information about the logic involved, as well as the significance and potential consequences of such processing for the consumer.

  2. Cybersecurity Audits:

The draft regulations propose mandating that certain businesses conduct annual cybersecurity audits to ensure compliance with the California Consumer Privacy Act (CCPA) and other relevant regulations, and they specify the criteria and standards for these audits, including the scope, methodology, and reporting requirements.

  3. Risk Assessments:

The draft regulations require businesses to perform regular risk assessments to identify and mitigate potential privacy risks associated with their data processing activities.

Under the regulations, businesses would need to document their risk assessment processes and findings, and make these available to the CPPA upon request.

  4. Insurance Regulations:

The draft regulations clarify when insurance companies must comply with the CCPA, ensuring that consumer data handled by these entities is adequately protected.

The proposed regulations will enter a 45-day public comment period, during which stakeholders can submit written and oral comments.  The CPPA will hold public hearings to gather additional feedback and discuss potential revisions to the proposed rules.

After the public comment period, the CPPA will review all feedback and make necessary adjustments to the regulations. This stage may involve multiple rounds of revisions and additional public consultations.

Once the CPPA finalizes the regulations, they will be submitted to the Office of Administrative Law (OAL) for review and approval. If approved, the regulations are expected to become effective by mid-2025.

The California Civil Rights Council published its most recent version of proposed revisions to Fair Employment and Housing Act (FEHA) regulations that include automated decision-making, and extended the comment period to 30 days. You can read more about the proposed revisions here from Jackson Lewis Attorneys Sayaka Karitani and Robert Yang.