When the California Consumer Privacy Act of 2018 (CCPA) became law, it was only a matter of time before other states adopted their own statutes intended to enhance privacy rights and consumer protection for their residents. After overwhelming support in the state legislature, Connecticut is about to become the fifth state with a comprehensive privacy law, as SB 6 awaits signature by Governor Ned Lamont.

If signed, the “Act Concerning Personal Data Privacy and Online Monitoring” (Act) will take effect July 1, 2023, the same day as the Colorado Privacy Act.

Key Elements

The Act largely tracks the Virginia Consumer Data Protection Act (VCDPA) and has the following key elements:

  • Jurisdictional Scope. The Act would apply to persons that conduct business in Connecticut or that produce products or services that are targeted to residents of Connecticut and that during the preceding calendar year: (i) controlled or processed personal data of at least 75,000 consumers (under the VCDPA this threshold is at least 100,000 Virginians) or (ii) controlled or processed personal data of at least 25,000 consumers and derived over 25 percent of gross revenue from the sale of personal data (50 percent under the VCDPA).
  • Exemptions. The Act provides exemptions at two levels, the entity level and the data level. Entities exempted from the Act include (i) agencies, commissions, districts, etc. of the state or political subdivisions, (ii) nonprofits, (iii) institutions of higher education, (iv) national securities associations, (v) financial institutions or data subject to the Gramm-Leach-Bliley Act (GLBA), and (vi) hospitals as defined under Connecticut law. Note that the Act does not include a broad-based, entity-level exemption for covered entities and business associates as defined under HIPAA.

The Act also exempts a long list of categories of information, including protected health information under HIPAA and certain identifiable private information in connection with human subject research. The Act also exempts certain personal information under the Fair Credit Reporting Act, the Driver’s Privacy Protection Act of 1994, the Family Educational Rights and Privacy Act, and other laws. In general, exempt data also includes data processed or maintained (i) in the course of an individual applying to, being employed by, or acting as an agent or independent contractor of a controller, processor, or third party, to the extent the data is collected and used within the context of that role, (ii) as emergency contact information, or (iii) as necessary to retain to administer benefits for another individual relating to the individual in (i) above.

  • Personal Data. Similar to the CCPA and GDPR, the Act defines personal data broadly to include any information that is linked or reasonably linkable to an identified or identifiable individual, but excludes de-identified data or publicly available information. However, maintaining de-identified data is not without obligation under the Act. Controllers that maintain such data must take reasonable measures to ensure that it cannot be reidentified. They must also publicly commit to maintaining and using de-identified data without attempting to reidentify it. Finally, the controller must contractually obligate any recipients of the de-identified data to comply with the Act.
  • Sensitive Data. Similar to the VCDPA, the Act includes a category for “sensitive data.” This is defined as (i) data revealing racial or ethnic origin, religious beliefs, mental or physical health condition or diagnosis, sex life, sexual orientation or citizenship or immigration status, (ii) the processing of genetic or biometric data for the purpose of uniquely identifying an individual, (iii) personal data collected from a known child, or (iv) precise geolocation data.  Notably, sensitive data cannot be processed without consumer consent. In the case of sensitive data of a known child, the data must be processed according to the federal Children’s Online Privacy Protection Act (COPPA).  Also, controllers must conduct and document a data protection assessment specifically for the processing of sensitive data.
  • Consumer. The Act defines “consumer” as “an individual who is a resident of” Connecticut. Consumers under the Act do not include individuals acting (i) in a commercial or employment context or (ii) as an employee, owner, director, officer or contractor of certain entities, including a government agency, whose communications or transactions with the controller occur solely within the context of that individual’s role with that entity.
  • Consumer Rights. Consumers under the Act would be afforded the following personal data rights:
    • To confirm whether or not a controller is processing their personal data and to access such personal data;
    • To correct inaccuracies in their personal data, taking into account the nature of the personal data and the purposes of the processing of their personal data;
    • To delete personal data provided by or obtained about them;
    • To obtain a copy of their personal data processed by the controller, in a portable and, to the extent technically feasible, readily usable format that allows them to transmit the data to another controller without hindrance, where the processing is carried out by automated means and without revealing trade secrets; and
    • To opt out of the processing of the personal data for purposes of (i) targeted advertising, (ii) sale, or (iii) profiling in furtherance of decisions that produce legal or similarly significant effects concerning them.
  • Reasonable Data Security Requirement. The Act affirmatively requires controllers to establish, implement, and maintain reasonable administrative, technical and physical data security practices to protect the confidentiality, integrity and accessibility of personal data appropriate to the volume and nature of the personal data at issue.
  • Data Protection Assessments. The Act imposes a new requirement for controllers: conduct data protection assessments (as mentioned above regarding sensitive data). Controllers must conduct and document data protection assessments for specific processing activities involving personal data that present a heightened risk of harm to consumers. These activities include targeted advertising, the sale of personal data, certain profiling, and the processing of sensitive data. Profiling activities will require a data protection assessment when the profiling would present a reasonably foreseeable risk of (A) unfair or deceptive treatment of, or unlawful disparate impact on, consumers, (B) financial, physical or reputational injury to consumers, (C) a physical or other intrusion upon the solitude or seclusion, or the private affairs or concerns, of consumers, where such intrusion would be offensive to a reasonable person, or (D) other substantial injury to consumers. When conducting such assessments, controllers must identify and weigh the benefits that may flow, directly and indirectly, from the processing to the controller, the consumer, other stakeholders, and the public against the potential risks to the rights of the consumer. Controllers also may consider how those risks are mitigated by safeguards the controller can employ. Factors controllers must consider include the use of de-identified data and the reasonable expectations of consumers, as well as the context of the processing and the relationship between the controller and the consumer whose personal data will be processed.
  • Enforcement. The Connecticut Attorney General’s office would have exclusive authority to enforce the Act. During the first eighteen months the Act is effective, until December 31, 2024, controllers would be provided notice of a violation and a 60-day period to cure. After that, the opportunity to cure may be granted depending on the Attorney General’s assessment of factors such as the number of violations, the size of the controller or processor, and the nature of the processing activities, among others. Violations of the Act constitute an unfair trade practice under Connecticut’s unfair and deceptive acts and practices (UDAP) law. Under that law, violations are subject to civil penalties of up to $5,000, plus actual and punitive damages and attorneys’ fees. The Act expressly excludes a private right of action.

Takeaway

Other states across the country are contemplating ways to enhance their data privacy and security protections. Organizations, regardless of their location, should be assessing and reviewing their data collection activities, building robust data protection programs, and investing in written information security programs.

“The EEOC is keenly aware that [artificial intelligence and algorithmic decision-making] tools may mask and perpetuate bias or create new discriminatory barriers to jobs. We must work to ensure that these new technologies do not become a high-tech pathway to discrimination.”

Statement from EEOC Chair Charlotte A. Burrows in late October 2021, announcing the agency’s launch of an initiative to ensure artificial intelligence (AI) and other emerging tools used in hiring and other employment decisions comply with federal civil rights laws.

The EEOC is not alone in its concerns about the use of AI, machine learning, and related technologies in employment decision-making. On March 25, 2022, California’s Fair Employment and Housing Council discussed draft regulations regarding automated-decision systems. The draft regulations were informed by testimony at a hearing on Algorithms & Bias that the Department of Fair Employment and Housing (DFEH) held last year.

Algorithms are increasingly making significant impacts on people’s lives, including in connection with important employment decisions, such as job applicant screening. Depending on the design of these newer technologies and the data used, AI and similar tools risk perpetuating biases that are hard to detect. Of course, the AI conundrum is not limited to employment. Research in the US and China, for example, suggests AI biases can lead to disparities in healthcare.

The draft regulations would update the DFEH’s existing regulations to address newer technologies, such as algorithms, which they refer to as “automated-decision systems” (ADS). The draft regulations define an ADS as: a computational process, including one derived from machine-learning, statistics, or other data processing or artificial intelligence techniques, that screens, evaluates, categorizes, recommends, or otherwise makes a decision or facilitates human decision making that impacts employees or applicants.

Examples of ADS include:

  • Algorithms that screen resumes for particular terms or patterns
  • Algorithms that employ face and/or voice recognition to analyze facial expressions, word choices, and voices
  • Algorithms that employ gamified testing, using questions, puzzles, or other challenges to make predictive assessments about an employee or applicant or to measure characteristics including but not limited to dexterity, reaction time, or other physical or mental abilities or characteristics
  • Algorithms that employ online tests meant to measure personality traits, aptitudes, cognitive abilities, and/or cultural fit

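To make the ADS concept concrete, here is a minimal, hypothetical sketch of the first example above, a keyword-based resume screener. The terms, weights, and threshold are invented for illustration and are not drawn from the draft regulations; the point is simply that even a few lines of code can screen applicants within the meaning of the proposed definition.

```python
# Hypothetical automated-decision system (ADS): a keyword-based resume screener.
# The terms, weights, and threshold below are invented for illustration only.
REQUIRED_TERMS = {"python": 2.0, "sql": 1.0, "project management": 1.0}
PASS_THRESHOLD = 2.0

def score_resume(resume_text: str) -> float:
    """Sum the weight of each term that appears anywhere in the resume."""
    text = resume_text.lower()
    return sum(weight for term, weight in REQUIRED_TERMS.items() if term in text)

def screen(resume_text: str) -> bool:
    """Return True if the resume advances to human review."""
    # Even a rule this simple "screens, evaluates, categorizes, recommends, or
    # otherwise ... facilitates human decision making" about applicants, and a
    # poorly chosen term list could screen out applicants in ways that correlate
    # with protected characteristics, which is the concern behind the draft rules.
    return score_resume(resume_text) >= PASS_THRESHOLD
```
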
The draft regulations would make it unlawful for an employer or covered entity to use qualification standards, employment tests, ADS, or other selection criteria that screen out or tend to screen out an applicant or employee or a class of applicants or employees based on characteristics protected by the Fair Employment and Housing Act (FEHA), unless the standards, tests, or other selection criteria are shown to be job-related for the position in question and are consistent with business necessity.

The draft regulations include rules for both the applicant selection and interview processes. Specifically, the use of and reliance upon ADS that limit or screen out or tend to limit or screen out applicants based on protected characteristics may constitute a violation of the FEHA.

The draft regulations would expand employers’ record-keeping requirements by requiring them to include machine-learning data as part of the record-keeping requirement, and by extending the retention period for covered records under the current regulations from two to four years. Additionally, the draft regulations would add a record retention requirement for any person “who engages in the advertisement, sale, provision, or use of a selection tool, including but not limited to an automated-decision system, to an employer or other covered entity.” These persons, who might include third-party vendors supporting employers’ use of such technologies, would be required to retain records of the assessment criteria used by the ADS for each employer or covered entity.

During the March 25th meeting, it was stressed that the regulations are intended to show how current law applies to new technology, not to propose new liabilities. Whether that holds remains to be seen, as the regulations, if adopted, could expand exposure to liability, or at least create more challenges for employers leveraging these technologies.

The regulations are currently in the pre-rule-making phase and the DFEH is accepting public comment on the regulations. Comments about the regulations can be submitted to the Fair Employment and Housing Council at FEHCouncil@dfeh.ca.gov.

Jackson Lewis will continue to track regulations affecting employers. If you have questions about the use of automated decision-making in the workplace or related issues, contact the Jackson Lewis attorney with whom you regularly work.

Responding to a negative online review can be cathartic. It can also backfire, as can failing to cooperate with an OCR investigation as required under HIPAA.

The Office for Civil Rights (OCR) recently announced four enforcement actions, including one imposing a $50,000 civil monetary penalty under HIPAA against a small dental practice. The OCR alleged the dentist impermissibly disclosed a patient’s protected health information (PHI) when the dentist responded to the patient’s negative online review. According to the OCR, the dentist’s response to the patient read:

It’s so fascinating to see [Complainant’s full name] make unsubstantiated accusations when he only came to my practice on two occasions since October 2013. He never came for his scheduled appointments as his treatment plans submitted to his insurance company were approved. He last came to my office on March 2014 as an emergency patient due to excruciating pain he was experiencing from the lower left quadrant. He was given a second referral for a root canal treatment to be performed by my endodontist colleague. Is that a bad experience? Only from someone hallucinating. When people want to express their ignorance, you don’t have to do anything, just let them talk. He never came back for his scheduled appointment Does he deserve any rating as a patient? Not even one star. I never performed any procedure on this disgruntled patient other than oral examinations. From the foregoing, it’s obvious that [Complainant’s full name] level of intelligence is in question and he should continue with his manual work and not expose himself to ridicule. Making derogatory statements will not enhance your reputation in this era [Complainant’s full name]. Get a life.

This is not the first time a dentist was fined by the OCR in connection with responding to a patient’s online review. In 2019, it was a Yelp review that resulted in a $10,000 penalty. So, why is the OCR imposing five times that penalty in this matter?

In short, the OCR explained the covered dental provider “did not respond to OCR’s data request, did not respond or object to an administrative subpoena, and waived its rights to a hearing by not contesting the findings in OCR’s Notice of Proposed Determination.” According to the OCR, among other things, the dentist has not removed the response to the patient’s online review.

Online review platforms, such as those provided by Google and Yelp, can be important for small healthcare providers and other small businesses to promote their practices and facilitate interaction with the persons they serve. However, caution should be exercised. Disclosing a patient’s identity and health status in a response to an adverse online review without the patient’s authorization is likely a violation of the HIPAA Privacy Rule. If not careful, and in the absence of a clear policy, casual and informal communications between practice staff and patients could expose the practice to significant risk.

But based on how this case turned out, a refusal to cooperate with the resulting OCR investigation can trigger a more significant HIPAA penalty.

So, what should small dental, physician, and other healthcare practices be doing to address these risks?

  • Get Compliant with HIPAA and Maintain Policies on Disclosures in Social Media! In this case, for example, the OCR noted that HIPAA-covered healthcare providers should have policies and procedures related to disclosures of PHI and, more specifically, disclosures of PHI on social media.
  • Train staff (including healthcare providers and owners) concerning these policies. Here, the OCR asked for copies of these policies. That is, the OCR did not only want to see a sign-in sheet showing staff attended the training; the agency wanted to see the policies the training was based on.
  • Maintain a HIPAA Notice of Privacy Practices. At a minimum, this should be posted in the office and on the practice’s website, as applicable.
  • Monitor social media activity by staff. Understand the social media channels that the practice engages in and consider periodically monitoring public social media activity by staff.
  • Cooperate with the OCR. Covered entities should absolutely make their case to the OCR in defense of a compliance review or investigation. At the same time, being responsive to the agency’s requests can go a long way toward resolving the matter quickly and with minimal impact. Having experienced legal counsel versed in the HIPAA Privacy and Security Rules to guide the practice can be tremendously helpful.

On February 23, 2022, the EU Commission published a Proposal for a Regulation on harmonized rules on the access to and use of data as part of its strategy for making the EU a leader in the data-driven society. The “Data Act” addresses the access, use and porting of “industrial data” generated in the EU by connected objects and related services.  The Act further ensures this data will be shared, stored and processed in accordance with EU rules, including when the dataset contains personal data.

Scope

The proposed Regulation applies specifically to data from the usage of connected objects and related services (e.g., software). Data means any digital representation of acts, facts or information including in an audio, visual or audio-visual format. While the Regulation applies to data derived from usage and events, it does not apply to information derived or inferred from this data.

Connected devices (i.e., IoT) include vehicles, home equipment, consumer goods, medical and health devices, and agricultural or industrial machinery that generate performance, usage or environmental data. Products designed primarily to display, play, record, or transmit content, such as personal computers, servers, tablets, smart phones, cameras, webcams, sound recording systems, and text scanners, are not covered by the Act.

The Regulation applies to (a) manufacturers of products and suppliers of related services placed on the market in the Union and users of such products or services; (b) data holders that make data available to data recipients in the Union; (c) data recipients in the Union to whom data are made available; (d) public sector bodies and Union institutions, agencies or bodies that request data holders to make data available where there is an exceptional need for the performance of a task carried out in the public interest, and the data holders that provide those data in response to such a request; and (e) providers of data processing services offering such services to customers in the Union.

Relevant Provisions

  • Manufacturers and designers must provide consumers and businesses with access to and use of data derived from utilization of connected devices they own, rent or lease as well as related services. This is data that is traditionally captured and held by the manufacturer or designer and the device owner’s right to the data is often unclear. Under the Act, the device owner will be able to use the data for after-market purposes. For example, a car owner might share usage data with their insurance company, or a business owner might use data from a connected manufacturing device to perform its own maintenance in lieu of using the manufacturer’s services. In support of these measures, manufacturers and designers must disclose what data is accessible and design products and services so the data is easily accessible by default.
  • Data sharing agreements between parties must avoid contractual terms that place SMEs at a disadvantage. The Act includes a test to assess the fairness of the contractual terms. The EU Commission plans to develop and publish non-binding model contract terms to help achieve this goal.
  • Cloud service providers must adopt portability measures that permit consumers and businesses to move data and applications to another provider without incurring any costs. The Act also mandates implementation of safeguards to protect data held in cloud infrastructures in the EU.
  • Customers shall have the right to transfer data from one data processor to another, free of commercial, technical, contractual or organizational obstacles.
  • Businesses shall provide certain data to public sector bodies in exceptional situations (e.g., public emergencies), under key conditions.
  • Cloud service providers will be subject to certain restrictions on international data sharing or access.
  • Certain databases containing data generated or obtained from connected devices will not be subject to the sui generis database right, so that such rights cannot be invoked to hinder access to and use of that data.

Next Steps

The proposed Regulation is designed to stimulate competition and create opportunities for data-driven innovation as part of the EU’s data strategy. In doing so, it complements the Data Governance Act, which facilitates data sharing across sectors and Member States. As the EU continues to strengthen its data strategy, U.S. businesses will want to monitor this space and consider preliminary steps towards potential compliance. The Regulation will apply to U.S. manufacturers and service providers who place connected objects and related services in the EU market. Compliance will necessitate appropriate policies, procedures, and mechanisms to meet the Regulation’s transparency, access, data minimization and safeguards mandates. At a minimum, this will involve designing and manufacturing products and services that incorporate user access mechanisms and protections by design and default.

Just as businesses are preparing to ensure compliance with similar laws in California, Colorado, and Virginia, they soon will need to consider a fourth jurisdiction, Utah. On March 24, 2022, Governor Spencer Cox signed a measure enacting the Utah Consumer Privacy Act (UCPA). The UCPA is set to take effect December 31, 2023. Note, Georgia and Massachusetts may be the next states to enact similar laws.

Key Elements

Again, as with the Colorado Privacy Act (CPA) and the Virginia Consumer Data Protection Act (VCDPA), the UCPA was modeled in part on the CCPA, CPRA, and the EU General Data Protection Regulation (GDPR). But there are some variations. Key elements of the UCPA include:

  • Jurisdictional Scope. The UCPA applies to controllers or processors that
    • conduct business in Utah or produce a product or service that is targeted to consumers who reside in Utah; and
    • have annual revenue of $25 million or more; and
    • satisfy one or more of the following: (i) during a calendar year, control or process personal data of at least 100,000 consumers, or (ii) control or process personal data of at least 25,000 consumers and derive over 50 percent of gross revenue from the sale of personal data.

Notably, as indicated above, it is not required that a controller be located in Utah to be subject to the UCPA.


  • Exemptions. The UCPA has a long list of entities and data to which the law does not apply. Although not an exhaustive list, some examples of excluded entities include governmental entities and their contractors when working on their behalf, tribes, non-profit corporations, institutions of higher education, HIPAA covered entities and business associates, and financial institutions. The UCPA also excludes certain categories of personal information, such as protected health information under HIPAA, identifiable private information involved in certain human subject research, deidentified information, and personal data regulated by FERPA. The UCPA also exempts personal data processed or maintained in the course of an individual applying to, being employed by, or acting as an agent or independent contractor of a controller, processor, or third party, to the extent that collection and use of the data are related to the individual’s role. This last exemption generally includes employee and applicant data, including the administration of benefits for individuals relating to employees.


  • Personal Data. Using a simpler definition than the CCPA/CPRA, the UCPA defines personal data to mean, “information that is linked or reasonably linkable to an identified individual or an identifiable individual.”


  • Sensitive Data. Like both the GDPR and the CPRA, the UCPA addresses a subset of personal data referred to as “sensitive data.” This is defined as personal data that reveals such items as racial or ethnic origin (unless processed by a video communication service); religious beliefs; medical history, mental or physical health, and medical treatment (unless processed by certain health care providers); sexual orientation; or citizenship or immigration status. This category of personal data also includes genetic and biometric data, as well as geolocation data. In general, controllers may not process sensitive data without providing clear notice and an opportunity to opt out.


  • Consumer. A “consumer” under the UCPA is “an individual who is a resident of Utah acting in an individual or household context.” Like the VCDPA, Utah’s law states a consumer does not include a “natural person acting in a commercial or employment context.”


  • Consumer Rights. Subject to the exemptions and other limitations set forth under the law, Utah residents will be afforded the following rights with respect to their personal data:
    • To confirm whether or not a controller is processing their personal data and to access such personal data;
    • To delete personal data that the consumer provided to the controller. It is unclear whether this includes data provided to a processor or other third party with respect to the controller;
    • To obtain a copy of their personal data that they previously provided to the controller in a portable and readily usable format that allows them to transmit the data to another controller without impediment, where the processing is carried out by automated means; and
    • To opt out of the processing of the personal data for purposes of (i) targeted advertising, or (ii) sale.


  • Controllers. Similar to the CCPA/CPRA, CPA, and VCDPA, controllers must provide an accessible and clear privacy notice that includes, among other things, the categories of personal data collected by the controller and how consumers may exercise a right with respect to their personal data. As with the CPRA, controllers are required to establish, implement, and maintain reasonable administrative, physical, and technical safeguards.


  • Processors. Processors are persons that “process” (collect, use, store, disclose, analyze, delete, or modify) personal information on behalf of controllers. Before processors may do so, they must enter into a contract that (i) clearly sets forth instructions for processing personal data, the nature and purpose of the processing, the type of data subject to processing, the duration of the processing, and the parties’ rights and obligations; (ii) requires the processor to ensure each person processing personal data is subject to a duty of confidentiality with respect to the personal data; and (iii) requires the processor to engage any subcontractor pursuant to a written contract that requires the subcontractor to meet the same obligations as the processor with respect to the personal data. Businesses with consumers in multiple states will have to compare these required provisions against those required under the CPRA, CPA, and VCDPA, as well as other privacy and security frameworks that may be applicable.


  • Enforcement. The Utah Attorney General’s office has exclusive authority to enforce the UCPA. In addition, a controller or processor must be provided 30 days’ written notice of any violation, allowing the entity the opportunity to cure the violation. Failure to cure allows the Attorney General to recover actual damages to the consumer and a fine of up to $7,500 per violation. A private right of action is not available under the UCPA.

Takeaway

States across the country are contemplating ways to enhance their data privacy and security protections. Accordingly, organizations, regardless of their location, should be assessing and reviewing their data collection activities, building robust data protection programs, and investing in written information security programs.

The FTC recently settled its enforcement action involving data privacy and security allegations against an online seller of customized merchandise. In addition to agreeing to pay $500,000, the online merchant consented to multiyear compliance, recordkeeping, and FTC reporting requirements. The essence of the FTC’s seven-count Complaint is that the merchant failed to properly disclose a data breach, misrepresented its data privacy and security practices, and did not maintain reasonable data security practices.

The federal consumer protection agency has broad enforcement authority under Section 5 of the Federal Trade Commission Act (FTC Act), which prohibits “unfair or deceptive acts or practices in or affecting commerce.” This enforcement action follows other recent FTC actions on similar issues, suggesting the agency is ramping up enforcement consistent with the Biden Administration’s overall direction on cybersecurity. There are steps organizations can take to minimize FTC scrutiny, and one place to start might be website disclosures, perhaps in connection with addressing the imminent website privacy compliance obligations under the California Privacy Rights Act.

In reviewing the FTC enforcement action in this matter, it is interesting to see what the agency considered personal information:

names, email addresses, telephone numbers, birth dates, gender, photos, social media handles, security questions and answers, passwords, PayPal addresses, the last four digits and expiration dates of credit cards, and Social Security or tax identification numbers

Some are obvious, some not so much.

The FTC also examined the merchant’s public disclosures concerning privacy and security of personal information, including from its website privacy policy, as well as email responses to customers and checkout pages. Here’s an example:

[Company] also pledges to use the best and most accepted methods and technologies to insure [sic] your personal information is safe and secure

In addition, the agency pointed to practices it viewed as not providing reasonable security for personal information stored on a network, such as:

  • Failing to implement “readily-available…low-cost protections,” against “well-known and reasonably foreseeable vulnerabilities,” such as “Structured Query Language” (“SQL”) injection, Cascading Style Sheets (“CSS”) and HTML injection, etc.
  • Storing personal information such as Social Security numbers and security questions and answers in clear, readable text
  • Using the SHA-1 hashing algorithm to protect passwords, a method deprecated by the National Institute of Standards and Technology in 2011
  • Failing to maintain a process for receiving and addressing security vulnerability reports from third-party researchers, academics, or other members of the public
  • Not implementing patch management policies and procedures to ensure the timely remediation of critical security vulnerabilities
  • Maintaining lax password policies that allow, for example, users to select the same word, including common dictionary words, as both the password and user ID
  • Storing personal information indefinitely on a network without a business need
  • Failing to log sufficient information to adequately assess cybersecurity events
  • Failing to comply with existing written security policies
  • Failing to reasonably respond to security incidents, including timely disclosure of security incidents
  • Not adequately assessing the extent of, and remediating, malware infections after learning that devices on the network were infected with malware

The above list (including the additional items listed in the Complaint and the Consent Order) provides valuable insight into what measures the FTC might expect to be in place to secure personal information.
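
For illustration only, the minimal sketch below (Python standard library; the table and column names are hypothetical) shows two safeguards that address deficiencies cited above: a parameterized query, which mitigates SQL injection, and salted, iterated password hashing in place of a bare, deprecated SHA-1 digest. It is not drawn from the Complaint or the Consent Order.

```python
# Illustrative sketch of two of the safeguards discussed above, using only the
# Python standard library. Table and column names are hypothetical.
import hashlib
import hmac
import secrets
import sqlite3

def find_user(conn: sqlite3.Connection, email: str):
    # Parameterized query: the driver binds the untrusted value, rather than the
    # value being concatenated into the SQL text (a classic SQL injection vector).
    return conn.execute(
        "SELECT id, email FROM users WHERE email = ?", (email,)
    ).fetchone()

def hash_password(password: str) -> tuple[bytes, bytes]:
    # Salted, iterated hashing (PBKDF2-HMAC-SHA256) instead of a single bare
    # SHA-1 digest; the iteration count is an illustrative, not mandated, value.
    salt = secrets.token_bytes(16)
    digest = hashlib.pbkdf2_hmac("sha256", password.encode("utf-8"), salt, 600_000)
    return salt, digest

def verify_password(password: str, salt: bytes, stored_digest: bytes) -> bool:
    candidate = hashlib.pbkdf2_hmac("sha256", password.encode("utf-8"), salt, 600_000)
    # Constant-time comparison avoids leaking information through timing.
    return hmac.compare_digest(candidate, stored_digest)
```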

The FTC also scrutinized the merchant’s disclosures on its website concerning the EU-U.S. Privacy Shield, alleging it failed to comply with some of the representations made in those disclosures. This aspect of the FTC’s enforcement action is notable because the agency acknowledged that the Privacy Shield had been invalidated by a decision of the European Court of Justice on July 16, 2020. But the FTC made clear that even if the Privacy Shield was determined to be insufficient under GDPR to permit the lawful transfer of personal data from the EU to the U.S., the merchant nonetheless represented that it would comply with the provisions of that framework.

The agreement reached in the Consent Order requires the merchant to take several steps, such as:

  • WISP. Within 60 days of the order, establish and implement a comprehensive written information security program (WISP) that protects the privacy, security, confidentiality, and integrity of personal information. To meet this requirement, the merchant must, among other things, (i) provide the WISP to its board or senior management every 12 months and not more than 30 days after a security incident, (ii) implement a range of specific safeguards and controls such as encryption, MFA, annual training, etc., (iii) consult with third-party experts concerning the WISP, and (iv) evaluate the capability of third party service providers to safeguard personal information and contractually require them to do so.
  • Independent WISP Assessment. The merchant must obtain independent third-party assessments of its WISP. The reporting period for these assessments is the first 180 days after the Consent Order, and each two-year period for 20 years following the Order.

To help survive FTC scrutiny, it is not enough to maintain reasonable safeguards to protect personal information. Companies also must ensure the statements they make about those safeguards are consistent with the practices they maintain. This includes statements in website privacy policies, customer receipts, and other correspondence. Additionally, companies must fully investigate and appropriately respond to potential security incidents that may have caused, or could lead to, unauthorized access to or acquisition of personal information.

Included within the Consolidated Appropriations Act, 2022, signed by President Joe Biden on March 15, the Cyber Incident Reporting for Critical Infrastructure Act of 2022 (Act) creates new data breach reporting requirements. This new mandate furthers the federal government’s efforts to improve the nation’s cybersecurity, spurred at least in part by the Colonial Pipeline cyberattack that snarled the flow of gas on the east coast for days and the SolarWinds attack.  It’s likely the threat of increasing cyberattacks from Russia in connection with its war effort in Ukraine also was front of mind for Congress and the President when enacting this law.

In short, the Act requires certain entities in the critical infrastructure sector to report to the Department of Homeland Security (DHS):

  1. a covered cyber incident not later than 72 hours after the covered entity reasonably believes the incident occurred, and
  2. any ransom payment within 24 hours of making the payment as a result of a ransomware attack (even if the ransomware attack is not a covered cyber incident to be reported under item 1 above)

Supplemental reporting also is required if substantial new or different information becomes available and until the covered entity notifies DHS that the incident has concluded and has been fully mitigated and resolved. Additionally, covered entities must preserve information relevant to covered cyber incidents and ransom payments according to rules to be issued by the Director of the Cybersecurity and Infrastructure Security Agency (Director).

The effective date of these requirements, along with the time, manner, and form of the reports, among other items, will be set forth in rules issued by the Director. The Director has 24 months to issue a notice of proposed rulemaking, and 18 months after that to issue a final rule.

Some definitions are helpful.

  • Covered entities. The Act covers entities in a critical infrastructure sector, as defined in Presidential Policy Directive 21, that meet the definition to be established by the Director. Examples of these sectors include critical manufacturing, energy, financial services, food and agriculture, healthcare, information technology, and transportation. In further defining covered entities, the Director will consider factors such as the consequences to national and economic security that could result from compromising an entity, whether the entity is a target of malicious cyber actors, and whether access to such an entity could enable disruption of critical infrastructure.
  • Covered cyber incidents. Reporting under the Act will be required for “covered cyber incidents.” Borrowing in part from Section 2209(a)(4) of Title XXII of the Homeland Security Act of 2002, a cyber incident under the Act generally means an occurrence that jeopardizes, without lawful authority, the integrity, confidentiality, or availability of information on an information system, or an information system. To be covered under the Act, the cyber incident must be a “substantial cyber incident” experienced by a covered entity as further defined by the Director.
  • Information systems. An information system means a “discrete set of information resources organized for the collection, processing, maintenance, use, sharing, dissemination, or disposition of information” which includes industrial control systems, such as supervisory control and data acquisition systems, distributed control systems, and programmable logic controllers.
  • Ransom payment. A ransom payment is the transmission of any money or other property or asset, including virtual currency, or any portion thereof, which has at any time been delivered as ransom in connection with a ransomware attack.

The Act also prescribes the information that a report of a covered cyber incident will need to include.

According to Giving USA, charitable contributions in 2020 exceeded $470 billion, 70 percent of which came from individuals.  Individuals deciding to donate to a particular organization may be considering factors beyond the organization’s particular mission, however compelling it may be. Misleading GoFundMe campaigns, FTC crackdowns on deceptive charities, and poorly run organizations are some of the reasons for increased scrutiny. One more reason is concern over how not-for-profit and charitable organizations handle donor personal information.

According to some reports, a third of donors perform research before donating. To assist these donors, several third-party rating sites, such as Charity Navigator, the Wise Giving Alliance, and CharityWatch, do much of the legwork for donors. They collect large amounts of data about these organizations, such as financial position, use of donated funds, corporate governance, transparency, and other practices. They obtain most of that data from the organizations’ Forms 990 and websites, where many organizations publish privacy policies.

Rating sites such as Charity Navigator base their ratings on comprehensive methodologies. A significant component of Charity Navigator’s rating, for example, relates to accountability and transparency, made up of 17 categories. A review of an organization’s website informs five of those 17 categories, namely (i) board members listed, (ii) key staff listed, (iii) audited financials published, (iv) Form 990 published, and (v) privacy policy content. Charity Navigator explains why it considers website privacy policies and which policies receive the highest rating:

Donors can be reluctant to contribute to a charity when their name, address, or other basic information may become part of donor lists that are exchanged or sold, resulting in an influx of charitable solicitations from other organizations. Our analysts check the charity’s website to see if the organization has a donor privacy policy in place and what it does and does not cover.  Privacy policies are assigned to one of the following categories:

Yes: This charity has a written donor privacy policy published on its website, which states unambiguously that (1) it will not share or sell a donor’s information with anyone else, nor send donor mailings on behalf of other organizations or (2) it will only share or sell personal information once the donor has given the charity specific permission to do so.

Opt-out: The charity has a written privacy policy published on its website which enables donors to tell the charity to remove their names and contact information from lists the charity shares or sells. How a donor can have themselves removed from a list differs from one charity to the next, but any and all opt-out policies require donors to take specific action.

No: This charity either does not have a written donor privacy policy in place or the existing policy does not meet our criteria for protecting contributors’ personal information.

The privacy policy must be specific to donor information. A general website policy which references “visitor” or “user” personal information is insufficient. A policy that refers to donor information collected on the website is also not sufficient as the policy must be comprehensive and applicable to both online and offline donors. The existence of a privacy policy of any type does not prohibit the charity itself from contacting the donor for informational, educational, or solicitation purposes.

Regulatory compliance obligations for websites have expanded in recent years, with privacy policies among the requirements. Even a website compliant with applicable regulation, however, may not derive an optimal score with Charity Navigator. For example, in many cases, website privacy statements need only apply to data collected on the website, not elsewhere at the organization. Also, website regulations do not require donors be specifically addressed. The point reduction for non-conforming privacy policies is relatively small for Charity Navigator, but can have an impact. Rating company CharityWatch reports on privacy policies “as an informational benchmark” but does not factor that information into its ratings.

The extent to which donors might direct their charitable dollars away from organizations without optimal ratings on privacy is unclear. At the same time, quickly posting a privacy policy to enhance a third-party rating in the hope of driving additional donors is probably not a prudent response. Not-for-profit, charitable organizations want to be sure their website privacy policies are compliant and consistent with their practices involving data, while also positioning them well to maximize donations in support of their mission. Drafting and maintaining these policies takes considerable care and attention.

According to a recent survey, about 45% of companies do not have a Chief Information Security Officer (CISO). As West Monroe’s “The Importance of a CISO” observes, it would be terrific for all organizations to have a CISO, but that simply may not be practical for some, particularly smaller organizations. Recent internal audit guidance issued by the federal Department of Labor (DOL), however, directs its investigators to verify the designation of a CISO when auditing retirement plans.

Nearly a year ago, on April 14, the DOL issued cybersecurity guidance for retirement plans (Guidance). Shortly thereafter, the Department began to weave its newly minted cybersecurity guidance into plan audits. Basically, the Guidance has three prongs:

  • Cybersecurity best practices for plans and their service providers
  • Exercise of prudence as an ERISA fiduciary when selecting service providers with respect to cybersecurity practices
  • Educating plan participants and beneficiaries on basic rules to reduce risk of fraud or loss to their retirement plan accounts

The DOL offers 12 helpful “best practices” for any cybersecurity program. Number four on its list provides:

  4. Clearly Defined and Assigned Information Security Roles and Responsibilities. For a cybersecurity program to be effective, it must be managed at the senior executive level and executed by qualified personnel. As a senior executive, the Chief Information Security Officer (CISO) would generally establish and maintain the vision, strategy, and operation of the cybersecurity program, which is performed by qualified personnel who should meet the following criteria:
    • Sufficient experience and necessary certifications.
    • Initial and periodic background checks.
    • Regular updates and training to address current cybersecurity risks.
    • Current knowledge of changing cybersecurity threats and countermeasures

Currently, DOL personnel who conduct retirement plan audits are likely to be very familiar with the full range of ERISA requirements for retirement plans. Until recently, however, the DOL had not made clear that cybersecurity was one of those requirements. In an effort to assist its investigators when auditing such plans, the agency provided an investigative guide that closely tracks the Guidance, and offers investigators suggestions for practices to look for during the cybersecurity audit. With regard to number four above, the investigative guide urges investigators to:

Look for:

    • Evidence verifying the designation of a senior leader as the Chief Information Security Officer (CISO) and demonstrating the CISO’s qualifications and accountability for the management, implementation, and evaluation of the cybersecurity program.

As DOL investigators grapple with applying the Guidance along with their internal resources, it remains unclear whether they will insist in all cases on an express designation of a “CISO” by all retirement plan sponsors and plan service providers. Of course, it will be important for organizations to clearly define and assign information security roles and responsibilities. The lack of a “CISO” designation alone should not necessarily mean an organization’s data security efforts are rudderless.

Persons in positions such as Director of IT, Chief Information Officer, or IT manager, all may help to support the organization’s efforts to maintain the privacy and security of plan data. But their roles and expertise may not be sufficient to fully address data security for the organization, the plan, or its service providers. For instance, persons in these positions may be appropriately focused on the organization’s IT systems and equipment for which security is only one issue. While these roles are important as well, the focus should be to make sure there is qualified senior leadership with information security roles and responsibilities. The West Monroe article above identifies nicely the attributes such senior leadership might have to fill this need:

  • Executive Presence: The [leader] should have the executive presence to effectively represent the organization’s position regarding information security and the ability to influence executives. They need to be able to identify and assess threats, and then translate the risks into language executives can understand
  • Business Knowledge: The [leader] needs to understand business operations and the critical data that organization is trying to protect. She needs to view business operations from a risk versus security perspective and implement controls to minimize risks and business disruptions.
  • Security Knowledge: A [leader] must be capable of understanding complex security configurations and reports from the technical perspective, and then be capable of translating the relevant technical details into language that other executives can understand.

This raises an important question for many organizations struggling to address cybersecurity, and not just for their retirement plans: how does the organization assess the qualifications of candidates for such a position, and then the individual(s)’ performance once in the position(s)? Another important question, suggested above, is whether smaller organizations can support a position with this level of expertise and qualifications. The DOL’s investigative guide seems to acknowledge this issue:

For many plans – especially small plans – IT systems, data, and cybersecurity risks are chiefly managed by third-party recordkeepers and service providers, and these service providers are an appropriate focus for an investigation of cybersecurity practices.

In doing so, the DOL also brings into focus the plan’s service providers.

The key takeaway is to think carefully about your organization’s approach to managing its cybersecurity obligations and requirements, including with respect to employee benefit plans. Organizations should have a qualified member of senior leadership assigned to, and accountable for, the management, implementation, and evaluation of the cybersecurity program.

Co-authors: Nadine C. Abrams and Richard Mrizek 

In a ruling that may have significant impact on the constant influx of biometric privacy suits under the Biometric Information Privacy Act (BIPA) in Illinois, the Illinois Supreme Court will soon weigh in on whether claims under Sections 15(b) and (d) of the BIPA, 740 ILCS 14/1, et seq., “accrue each time a private entity scans a person’s biometric identifier and each time a private entity transmits such a scan to a third party, respectively, or only upon the first scan and first transmission.” Adopting a “per-scan” theory of accrual or liability under the BIPA would lead to absurd and unjust results, argued a friend-of-the-court brief filed by Jackson Lewis in Cothron v. White Castle Systems, Inc., in the Illinois Supreme Court, on behalf of a coalition of trade associations whose more than 30,000 members employ approximately half of all workers in the State of Illinois.

To date, more than 1,450 class action lawsuits have been filed under BIPA. Businesses that collect, use, and store biometric data should be tracking the Cothron decision closely.  The full update on Jackson Lewis’s brief in the Cothron case before the Illinois Supreme Court is available here.