Last month, the Illinois Supreme Court heard oral argument in the closely watched case of Cothron v. White Castle System Inc., and is set to decide when claims under Sections 15(b) and 15(d) of the Illinois Biometric Information Privacy Act accrue.

The court’s forthcoming decision in Cothron is likely to have a significant impact on Illinois employers who are facing BIPA litigation, or who use, or have used, biometric technology in the workplace.

Read Full Article at Law360

Subscription may be required to view article

The federal government has been trying to reach a consensus on data privacy and thus far has failed to pass legislation. On June 3, 2022, a bipartisan draft bill, titled the American Data Privacy and Protection Act, was released by the Committee on Energy and Commerce. The bill intends to provide comprehensive data privacy legislation, including the development of a uniform, national data privacy framework and a robust set of consumer privacy rights.

A covered entity for purposes of the draft bill is defined as “any entity or person that collects, processes, or transfers covered data” and that is subject to the Federal Trade Commission Act, is a common carrier under the Communications Act of 1934, or is an organization not organized to carry on business for its own profit or that of its members.

Per the draft, the new act would be carried out by a new bureau within the Federal Trade Commission (FTC). Interestingly, the proposed legislation would preempt similar state laws, though it excludes California’s CCPA/CPRA and Illinois’ BIPA and Genetic Information Privacy Act (GIPA) from that preemption.

The draft bill covers a wide swath of consumer data privacy issues, from data collection to civil rights and algorithms. The following are some highlights of note:

Data Collection Requirements

The draft legislation imposes a duty on all covered entities not to unnecessarily collect or use covered data. Covered data is defined broadly as “information that identifies or is linked or reasonably linkable to an individual or a device that identifies or is linked or reasonably linkable to 1 or more individuals, including derived data and unique identifiers.” The FTC would be charged with issuing additional guidance regarding what is reasonably necessary, proportionate, and limited for purposes of collecting data.

Covered entities would have a duty to implement reasonable policies, practices, and procedures for collecting, processing, and transferring covered data. Further, covered entities would be required to provide individuals with privacy policies detailing data processing, transfer, and security activities in a readily available and understandable manner. The policies would need to include contact information, the affiliates to which the covered entity transfers covered data, and the purposes for which each category of covered data is collected, processed, and transferred.

Covered entities would be prohibited from conditioning, or effectively conditioning, the provision or termination of services or products on an individual’s waiver of any privacy rights established under the law.

There would be additional executive responsibility for large data holders, including requiring CEOs and privacy officers to annually certify that their company maintains reasonable internal controls and reporting structures for compliance with the statute.

Individual Rights Created

Individuals would be granted rights of access to, correction of, deletion of, and portability of covered data that pertains to them. These rights are similar to many of the rights California residents have under the CCPA/CPRA. The right of access would include obtaining covered data in a human-readable and downloadable format that individuals can understand without expertise, the names of any other entities the data was transferred to, the categories of sources used to collect any covered data, and the purposes for transferring the data.

Sensitive covered data, which includes items such as an individual’s health diagnosis, financial account information, biometric information, and government identifiers such as Social Security information, among other items, could not be collected without the individual’s affirmative consent.

Civil Rights and Algorithms

Unsurprisingly, algorithms, which were recently addressed in guidance by the EEOC and DOJ, are also addressed in this draft legislation. Under the proposed legislation, covered entities may not collect, process, or transfer data in a manner that discriminates based on race, color, religion, national origin, gender, sexual orientation, or disability. This section of the law would also require large data holders that use algorithms to assess those algorithms annually and submit annual impact assessments to the FTC.

Although comprehensive national privacy legislation has faced difficulties being passed in the past, Jackson Lewis will continue to track the status of this legislation as it moves through Congress. If you have questions about this proposed legislation or related issues, please reach out to a member of our Privacy, Data, and Cybersecurity practice group.

At the California Privacy Protection Agency (CPPA) Board meeting on June 8, 2022, the Board voted to begin the rulemaking process. The Board previously released a 66-page draft of regulations intended to implement and interpret the California Consumer Privacy Act (CCPA) as amended by the California Privacy Rights Act (CPRA). While the draft redline regulations address topics such as implementing “easy to understand” language for consumer CCPA requests, they do not address all 22 regulatory topics required under the CPRA. For example, the draft does not cover the opt-in/opt-out of automated decision-making technology.

The next step is for the Board to file a Notice of Proposed Rulemaking Action, commencing the formal rulemaking process. The notice will be posted on the agency’s website and published in the California Regulatory Notice Register. Once the notice is filed, a public comment period of at least 45 days will begin, during which stakeholders and interested parties can submit written comments. The CPPA will also schedule a public hearing on the regulations.

Jackson Lewis will continue to track information related to privacy regulations and related issues. For additional information on the CPRA, please reach out to a member of our Privacy, Data, and Cybersecurity practice group.

On June 8, 2022, the California Privacy Protection Agency (CPPA) Board will meet to discuss and take potential action regarding a draft of its proposed regulations. The June 8th public meeting includes an agenda item under which the CPPA Board will consider “possible action regarding proposed regulations … including possible notice of proposed action.”

In advance of the meeting, the CPPA posted on its website, for discussion purposes, draft redline regulations revising the current regulations released by the California Attorney General (recently renumbered by the CPPA). The quietly released 66-page draft regulations are intended to implement and interpret the California Consumer Privacy Act (CCPA) as amended by the California Privacy Rights Act (CPRA). While the draft redline regulations address topics such as implementing “easy to understand” language for consumer CCPA requests, they do not address all 22 regulatory topics required under the CPRA. For example, the draft does not cover the opt-in/opt-out of automated decision-making technology.

Here are some of the highlights of the proposed draft regulations:

  • Adds a definition of “disproportionate effort” within the context of responding to consumer requests. For example, disproportionate effort might be involved when the personal information that is the subject of the request is not in a searchable or readily accessible format, is maintained only for legal or compliance purposes, is not sold or used for any commercial purpose, and would not impact the consumer in any material manner;
  • Adds a new section on restrictions on the collection and use of personal information that contains illustrative examples. One example is a business that offers a mobile flashlight app. That business would need the consumer’s explicit consent to collect the consumer’s geolocation information because that collection is incompatible with the context in which the personal information is collected (the provision of a flashlight app);
  • Adds requirements for disclosures and communications to consumers. This includes making sure communications are reasonably accessible to consumers with disabilities whether online or offline;
  • Adds requirements for methods for submitting CCPA requests and obtaining consumer consent. A key principle here is that the process for consumers to select a more privacy-protective option should not be more difficult or time-consuming than the process to select a less protective option. Symmetry is the goal; and
  • Makes substantial revisions to the requirements for the privacy policy that a business is required to provide to consumers detailing the business’s online and offline practices regarding collection, use, sale, sharing, and retention of personal information. This includes new provisions concerning the right to limit the use and disclosure of sensitive personal information and the right to correct personal information.

To date, the Agency has not issued a Notice of Proposed Rulemaking to start the formal rulemaking process, and the timeframe associated with the draft regulations remains unclear, especially given that the CPRA requires the CPPA to finalize regulations by July 1, 2022. It is expected that the June 8th meeting will provide details on the process.

Jackson Lewis will continue to track information related to privacy regulations and related issues. For additional information on the CPRA, please reach out to a member of our Privacy, Data, and Cybersecurity practice group.

Organizations attacked with ransomware have a bevy of decisions to make, very quickly! One of those decisions is whether to pay the ransom. Earlier this year, I had the honor of contributing to a two-part series entitled Ransomware: To pay or not to pay? (Part 1 and Part 2). Joined by Danielle Gardiner, CPA, CFF, SVP of Lowers Forensics International, and Shiraz Saeed, VP, Cyber Risk Product Leader at Arch Insurance Group, we explored a range of considerations that organizations need to weigh when making this consequential decision under very difficult circumstances. A new law in North Carolina makes this decision a lot easier for certain public sector entities in the state: they cannot pay.

North Carolina’s Current Operations Appropriations Act of 2021 added a new article to Chapter 143 of the State’s General Statutes, which now reads in part:

No State agency or local government entity shall submit payment or otherwise communicate with an entity that has engaged in a cybersecurity incident on an information technology system by encrypting data and then subsequently offering to decrypt that data in exchange for a ransom payment.

If a state agency or local government entity experiences a “ransom request” in connection with a cybersecurity incident, it must consult with the state’s Department of Information Technology. These rules apply to the following entities:

  • State agency – Any agency, department, institution, board, commission, committee, division, bureau, officer, official, or other entity of the executive, judicial, or legislative branches of State government. The term includes The University of North Carolina and any other entity for which the State has oversight responsibility.
  • Local government entity – A local political subdivision of the State, including, but not limited to, a city, a county, a local school administrative unit as defined in G.S. 115C‑5, or a community college.

Double extortion ransomware attacks raise an interesting issue under this law. These attacks are more sinister because they do not just encrypt the victim’s system and demand payment in exchange for a decryption utility. They also exfiltrate data from the victim’s systems, threatening to disclose it on the dark web unless the attacker is paid a ransom in exchange for the “promise” to delete and not disclose the data.

It is unclear whether the North Carolina law reaches this second form of extortion in double extortion ransomware attacks, but its proscription is consistent with the Federal Bureau of Investigation’s position: the FBI does not support paying a ransom in response to a ransomware attack. But when the possibility of payment is on the table, organizations need to know that simply making the payment could put them into legal jeopardy.

As stated in Ransomware: To pay or not to pay? – Part 2:

The primary basis of this legal jeopardy is that under the International Emergency Economic Powers Act (IEEPA) and the Trading with the Enemy Act (TWEA) U.S. persons engaging in transactions with certain listed organizations can subject those persons to significant penalties. Specifically, the U.S. Department of Treasury’s Office of Foreign Assets Control (OFAC) maintains the Specially Designated Nationals and Blocked Persons List (SDN List), in addition to other blocked persons. A cryptocurrency transaction with one of these entities may result in the victim’s ability to retrieve access to its systems and data, but it could subject the organization to OFAC enforcement.

In its latest round of guidance on this issue, on October 1, 2020, OFAC issued the Advisory on Potential Sanctions Risks for Facilitating Ransomware Payments (Advisory). The Advisory makes clear that entities involved in facilitating a ransom payment may have done so in violation of OFAC regulations, subjecting them to enforcement action and fines. This risk is heightened by the difficulty of determining who is on the other side of the Bitcoin transaction.

The Advisory highlights these concerns, while also noting that certain pre- and post-breach actions could mitigate OFAC exposure. Implementing “a risk-based compliance program” pre-breach and promptly making a “complete report of a ransomware attack to law enforcement” after an attack can, according to the Advisory, mitigate enforcement.

OFAC compliance may not be the only regulatory hurdle to overcome if momentum is moving in favor of payment. In the summer of 2021, following a string of massive ransomware attacks, including the attack on Colonial Pipeline, four states proposed legislation that would ban ransom payments. These efforts have not been successful to date, but organizations need to consider regulatory limitations on ransom payments as privacy and cybersecurity laws rapidly evolve.

Ransomware attacks can happen to any organization, large or small. It is critical that organizations not only strengthen their systems to prevent such attacks, but also strengthen their preparedness should an attack happen. This includes thinking through, ahead of time, the organization’s approach to the question of whether or not to pay a ransom.

States continue to tinker with their breach notification laws. The latest modification to the Indiana statute relates to the timing of notification. On March 18, 2022, Indiana Governor Eric Holcomb signed HB 1351, which tightens the rules for providing timely notice to individuals affected by a data breach.

Prior to the change, the relevant section read:

a person required to make a disclosure or notification under this chapter shall make the disclosure or notification without unreasonable delay

After the change, which is effective July 1, 2022, it reads:

a person required to make a disclosure or notification under this chapter shall make the disclosure or notification without unreasonable delay, but not more than forty-five (45) days after the discovery of the breach.

As revised, the statute attempts to make clear the last date by which notification must be provided: not later than 45 days after discovery. But there remains a question about whether notification could be (or should be) provided before that 45-day period ends. After discovery of a breach, a period of delay in notification is permitted if it is reasonable. The current law describes reasonable delay as follows:

[A] delay is reasonable if the delay is:

(1) necessary to restore the integrity of the computer system;

(2) necessary to discover the scope of the breach; or

(3) in response to a request from the attorney general or a law enforcement agency to delay disclosure because disclosure will:

(A) impede a criminal or civil investigation; or

(B) jeopardize national security.

IC 24-4.9-3-3(a).

This analysis can become significantly more complicated when residents of multiple states are affected by the breach. Although some states have a similar 45-day notification deadline (e.g., Alabama, Maryland, Ohio, and Wisconsin), other states have shorter periods (e.g., 30 days in Florida) or longer periods (e.g., 60 days in Connecticut). Additionally, in our experience as breach counsel, covered entities with breach reporting obligations can expect heightened attention from the Indiana Attorney General’s Office to perceived delays in notification. This change reflects that focus on timeliness and may become the source of greater enforcement activity in Indiana.

On May 20, 2022, the Federal Trade Commission’s Team CTO and the Division of Privacy and Identity Protection published a blog post entitled “Security Beyond Prevention: The Importance of Effective Breach Disclosures.” In the post, the FTC takes the position that in some cases there may be a de facto data breach notification requirement, even though no section of the Federal Trade Commission Act or implementing regulation currently imposes an express, broadly applicable data breach notification requirement. Businesses should nonetheless take this de facto rule into account as part of their incident response plans.

The post stresses the importance of strong incident detection and response processes, noting they are vital to maintaining reasonable security.  The notification component can prevent and minimize consumer harm from breaches because, among other things, it can spur consumers to take actions to mitigate harm resulting from the breach. According to the FTC, failure to maintain such practices could indicate a lack of competition in the marketplace. Notably, the post states:

Regardless of whether a breach notification law applies, a breached entity that fails to disclose information to help parties mitigate reasonably foreseeable harm may violate Section 5 of the FTC Act.

The American Recovery and Reinvestment Act of 2009 directed the FTC to establish rules to require notification to consumers when the security of their individually identifiable health information has been breached. However, those rules apply only to vendors of personal health records and related entities, although a recent FTC policy statement clarified the application of the rule. In support of the blog post’s more broadly applicable de facto requirement, the FTC discussed some recent enforcement actions.

The post referred to the recent settlement with an online retailer that allegedly failed to timely notify consumers and other relevant parties after data breaches, thereby preventing parties from taking measures to mitigate harm.  The FTC viewed this as an unfair trade practice. Looking to other enforcement actions as examples, the post explained that deceptive statements can hinder consumers from taking critical actions to mitigate foreseeable harms like identity theft, loss of sensitive data, or financial impacts. Looking at these cases together, the post concluded that:

companies have legal obligations with respect to disclosing breaches, and that these disclosures should be accurate and timely.

As any victim of a data incident or experienced breach counsel knows, a critical part of just about any security incident is determining whether there has been a breach and whether notification is required. For an incident affecting individuals in a significant number of countries and/or states, navigating the various data breach statutes and regulations is challenging. According to the FTC’s post, even if that process leads to the conclusion that notification is not required under state law, for example, the FTC’s de facto rule may still require notification to avoid an allegation of an unfair or deceptive trade practice.

Businesses, therefore, should review their incident response plans in light of this informal guidance.

When the California Consumer Privacy Act of 2018 (CCPA) became law, it was only a matter of time before other states adopted their own statutes intending to enhance privacy rights and consumer protection for their residents. After overwhelming support in the state legislature, Connecticut is about to become the fifth state with a comprehensive privacy law, as SB 6 awaits signature by Governor Ned Lamont.

If signed, the “Act Concerning Personal Data Privacy and Online Monitoring” (Act) will take effect July 1, 2023, the same day as the Colorado Privacy Act.

Key Elements

The Act largely tracks the Virginia Consumer Data Protection Act (VCDPA) and has the following key elements:

  • Jurisdictional Scope. The Act would apply to persons that conduct business in Connecticut or that produce products or services that are targeted to residents of Connecticut and that during the preceding calendar year: (i) controlled or processed personal data of at least 75,000 consumers (under the VCDPA this threshold is at least 100,000 Virginians) or (ii) controlled or processed personal data of at least 25,000 consumers and derived over 25 percent of gross revenue from the sale of personal data (50 percent under the VCDPA).
  • Exemptions. The Act provides exemptions at two levels, the entity level and the data level. Entities exempted from the Act include (i) agencies, commissions, districts, etc. of the state or political subdivisions, (ii) nonprofits, (iii) higher education, (iv) national securities associations, (v) financial institutions or data subject to Gramm-Leach-Bliley Act (GLBA), and (vi) covered entities and business associates as defined under HIPAA.

The Act also exempts a long list of categories of information, including protected health information under HIPAA and certain identifiable private information in connection with human subject research. The Act also exempts certain personal information under the Fair Credit Reporting Act, Driver’s Privacy Protection Act of 1994, Family Educational Rights and Privacy Act, and other laws. In general, exempt data also includes data processed or maintained (i) in the course of an individual applying to, being employed by, or acting as an agent or independent contractor of an entity, to the extent that the data is collected and used within the context of that role, (ii) as emergency contact information, or (iii) that is necessary to retain to administer benefits for another individual relating to the individual described in (i) above.

  • Personal Data. Similar to the CCPA and GDPR, the Act defines personal data broadly to include any information that is linked or reasonably linkable to an identified or identifiable individual, but excludes de-identified data and publicly available information. However, maintaining de-identified information is not without obligation under the Act. Controllers that maintain such information must take reasonable measures to ensure that the data cannot be reidentified. They must also publicly commit to maintaining and using de-identified data without attempting to reidentify it. Finally, the controller must contractually obligate any recipients of the de-identified data to comply with the Act.
  • Sensitive Data. Similar to the VCDPA, the Act includes a category for “sensitive data.” This is defined as (i) data revealing racial or ethnic origin, religious beliefs, mental or physical health condition or diagnosis, sex life, sexual orientation or citizenship or immigration status, (ii) the processing of genetic or biometric data for the purpose of uniquely identifying an individual, (iii) personal data collected from a known child, or (iv) precise geolocation data.  Notably, sensitive data cannot be processed without consumer consent. In the case of sensitive data of a known child, the data must be processed according to the federal Children’s Online Privacy Protection Act (COPPA).  Also, controllers must conduct and document a data protection assessment specifically for the processing of sensitive data.
  • Consumer. The Act defines “consumer” as “an individual who is a resident of” Connecticut. Consumers under the Act do not include individuals acting (i) in a commercial or employment context or (ii) as an employee, owner, director, officer, or contractor of certain entities (including a government agency) whose communications or transactions with the controller occur solely within the context of that individual’s role with that entity.
  • Consumer Rights. Consumers under the Act would be afforded the following personal data rights:
    • To confirm whether or not a controller is processing their personal data and to access such personal data;
    • To correct inaccuracies in their personal data, taking into account the nature of the personal data and the purposes of the processing of their personal data;
    • To delete personal data provided by or obtained about them;
    • To obtain a copy of their personal data processed by the controller, in a portable and, to the extent technically feasible, readily usable format that allows them to transmit the data to another controller without hindrance, where the processing is carried out by automated means and without revealing trade secrets; and
    • To opt out of the processing of the personal data for purposes of (i) targeted advertising, (ii) sale, or (iii) profiling in furtherance of decisions that produce legal or similarly significant effects concerning them.
  • Reasonable Data Security Requirement. The Act affirmatively requires controllers to establish, implement, and maintain reasonable administrative, technical and physical data security practices to protect the confidentiality, integrity and accessibility of personal data appropriate to the volume and nature of the personal data at issue.
  • Data Protection Assessments. The Act imposes a new requirement for controllers: conduct data protection assessments (as mentioned above regarding sensitive data). Controllers must conduct and document data protection assessments for specific processing activities involving personal data that present a heightened risk of harm to consumers. These activities include targeted advertising, the sale of personal data, profiling, and the processing of sensitive data. Profiling will require a data protection assessment when it presents a reasonably foreseeable risk of (A) unfair or deceptive treatment of, or unlawful disparate impact on, consumers, (B) financial, physical or reputational injury to consumers, (C) a physical or other intrusion upon the solitude or seclusion, or the private affairs or concerns, of consumers, where such intrusion would be offensive to a reasonable person, or (D) other substantial injury to consumers. When conducting such assessments, controllers must identify and weigh the benefits that may flow, directly and indirectly, from the processing to the controller, the consumer, other stakeholders, and the public against the potential risks to the rights of the consumer. Controllers also can consider how those risks are mitigated by safeguards the controller can employ. Factors controllers must consider include the use of de-identified data and the reasonable expectations of consumers, as well as the context of the processing and the relationship between the controller and the consumer whose personal data will be processed.
  • Enforcement. The Connecticut Attorney General’s office would have exclusive authority to enforce the Act. During the first eighteen months the Act is effective, until December 31, 2024, controllers would be provided notice of a violation and would have a 60-day cure period. After that, the opportunity to cure may be granted depending on the Attorney General’s assessment of factors such as the number of violations, the size of the controller or processor, and the nature of the processing activities, among others. Violations of the Act would constitute an unfair trade practice under Connecticut’s Unfair and Deceptive Acts and Practices (UDAP) law. Under the UDAP law, violations are subject to civil penalties of up to $5,000, plus actual and punitive damages and attorneys’ fees. The Act expressly excludes a private right of action.

Takeaway

Other states across the country are contemplating ways to enhance their data privacy and security protections. Organizations, regardless of their location, should be assessing and reviewing their data collection activities, building robust data protection programs, and investing in written information security programs.

“The EEOC is keenly aware that [artificial intelligence and algorithmic decision-making] tools may mask and perpetuate bias or create new discriminatory barriers to jobs. We must work to ensure that these new technologies do not become a high-tech pathway to discrimination.”

Statement from EEOC Chair Charlotte A. Burrows in late October 2021, announcing the agency’s launch of an initiative to ensure that artificial intelligence (AI) and other emerging tools used in hiring and other employment decisions comply with federal civil rights laws.

The EEOC is not alone in its concerns about the use of AI, machine learning, and related technologies in employment decision-making. On March 25, 2022, California’s Fair Employment and Housing Council discussed draft regulations regarding automated-decision systems. The draft regulations were informed by testimony at a hearing on Algorithms & Bias that the Department of Fair Employment and Housing (DFEH) held last year.

Algorithms are increasingly making significant impacts on people’s lives, including in connection with important employment decisions such as job applicant screening. Depending on the design of these newer technologies and the data used, AI and similar tools risk perpetuating biases that are hard to detect. Of course, the AI conundrum is not limited to employment. Research in the US and China, for example, suggests AI biases can lead to disparities in healthcare.

In the draft regulations, the DFEH attempts to update its regulations to address newer technologies such as algorithms, which it refers to as “automated-decision systems” (ADS). The draft regulations define an ADS as “a computational process, including one derived from machine-learning, statistics, or other data processing or artificial intelligence techniques, that screens, evaluates, categorizes, recommends, or otherwise makes a decision or facilitates human decision making that impacts employees or applicants.”

Examples of ADS include:

  • Algorithms that screen resumes for particular terms or patterns
  • Algorithms that employ face and/or voice recognition to analyze facial expressions, word choices, and voices
  • Algorithms that employ gamified testing, using questions, puzzles, or other challenges to make predictive assessments about an employee or applicant or to measure characteristics including but not limited to dexterity, reaction time, or other physical or mental abilities or characteristics
  • Algorithms that employ online tests meant to measure personality traits, aptitudes, cognitive abilities, and/or cultural fit

The draft regulations would make it unlawful for an employer or covered entity to use qualification standards, employment tests, ADS, or other selection criteria that screen out, or tend to screen out, an applicant or employee or a class of applicants or employees based on characteristics protected by the Fair Employment and Housing Act (FEHA), unless the standards, tests, or other selection criteria are shown to be job-related for the position in question and consistent with business necessity.

The draft regulations include rules for both the applicant selection and interview processes. Specifically, the use of and reliance upon ADS that limit or screen out or tend to limit or screen out applicants based on protected characteristics may constitute a violation of the FEHA.

The draft regulations would expand employers’ record-keeping requirements by requiring them to include machine-learning data as part of the record-keeping requirement, and by extending the retention period for covered records under the current regulations from two to four years. Additionally, the draft regulations would add a record retention requirement for any person “who engages in the advertisement, sale, provision, or use of a selection tool, including but not limited to an automated-decision system, to an employer or other covered entity.” These persons, who might include third-party vendors supporting employers’ use of such technologies, would be required to retain records of the assessment criteria used by the ADS for each employer or covered entity.

During the March 25th meeting, it was stressed that the regulations are intended to show how current law applies to new technology, not to propose new liabilities. Whether that holds true remains to be seen, as these regulations, if adopted, could expand exposure to liability, or at least create more challenges for employers leveraging these technologies.

The regulations are currently in the pre-rulemaking phase, and the DFEH is accepting public comment on them. Comments about the regulations can be submitted to the Fair Employment and Housing Council at FEHCouncil@dfeh.ca.gov.

Jackson Lewis will continue to track regulations affecting employers. If you have questions about the use of automated decision-making in the workplace or related issues, contact the Jackson Lewis attorney with whom you regularly work.

It can be cathartic responding to a negative online review. It can also backfire, as can failing to cooperate with an OCR investigation as required under HIPAA.

The Office for Civil Rights (OCR) recently announced four enforcement actions, including one against a small dental practice that resulted in a $50,000 civil monetary penalty under HIPAA. The OCR alleged the dentist impermissibly disclosed a patient’s protected health information (PHI) when the dentist responded to the patient’s negative online review. According to the OCR, the dentist’s response to the patient read:

It’s so fascinating to see [Complainant’s full name] make unsubstantiated accusations when he only came to my practice on two occasions since October 2013. He never came for his scheduled appointments as his treatment plans submitted to his insurance company were approved. He last came to my office on March 2014 as an emergency patient due to excruciating pain he was experiencing from the lower left quadrant. He was given a second referral for a root canal treatment to be performed by my endodontist colleague. Is that a bad experience? Only from someone hallucinating. When people want to express their ignorance, you don’t have to do anything, just let them talk. He never came back for his scheduled appointment Does he deserve any rating as a patient? Not even one star. I never performed any procedure on this disgruntled patient other than oral examinations. From the foregoing, it’s obvious that [Complainant’s full name] level of intelligence is in question and he should continue with his manual work and not expose himself to ridicule. Making derogatory statements will not enhance your reputation in this era [Complainant’s full name]. Get a life.

This is not the first time a dentist was fined by the OCR in connection with responding to a patient’s online review. In 2019, it was a Yelp review that resulted in a $10,000 penalty. So, why is the OCR imposing five times that penalty in this matter?

In short, the OCR explained the covered dental provider “did not respond to OCR’s data request, did not respond or object to an administrative subpoena, and waived its rights to a hearing by not contesting the findings in OCR’s Notice of Proposed Determination.” According to the OCR, among other things, the dentist has not removed the response to the patient’s online review.

Online review platforms, such as those provided by Google and Yelp, can be important tools for small healthcare providers and other small businesses to promote their practices and facilitate interaction with the persons they serve. However, caution should be exercised. Disclosing a patient’s identity and health status in a response to an adverse online review, without the patient’s authorization, is likely a violation of the HIPAA Privacy Rule. If a practice is not careful, and in the absence of a clear policy, casual and informal communications between practice staff and patients could expose the practice to significant risk.

But based on how this case turned out, a refusal to cooperate with the resulting OCR investigation can trigger a more significant HIPAA penalty.

So, what should small dental, physician, and other healthcare practices be doing to address these risks?

  • Get compliant with HIPAA and maintain policies on disclosures in social media! In this case, for example, the OCR noted that HIPAA-covered healthcare providers should have policies and procedures related to disclosures of PHI generally and, more specifically, to disclosures of PHI on social media.
  • Train staff (including healthcare providers and owners) concerning these policies. Here, the OCR asked for copies of these policies. That is, the OCR did not only want to see a sign-in sheet showing staff attended the training; the agency also wanted to see the policies on which the training was based.
  • Maintain a HIPAA Notice of Privacy Practices. At a minimum, this should be posted in the office and on the practice’s website, as applicable.
  • Monitor social media activity by staff. Understand the social media channels that the practice engages in and consider periodically monitoring public social media activity by staff.
  • Cooperate with the OCR. Covered entities should absolutely make their case to the OCR in defense of a compliance review or investigation. At the same time, being responsive to the agency’s requests can go a long way toward resolving the matter quickly and with minimal impact. Having experienced legal counsel versed in the HIPAA Privacy and Security Rules to guide the practice can be tremendously helpful.