Just as businesses are preparing to ensure compliance with similar laws in California, Colorado, and Virginia, they soon will need to consider a fourth jurisdiction: Utah. On March 24, 2022, Governor Spencer Cox signed a measure enacting the Utah Consumer Privacy Act (UCPA). The UCPA is set to take effect December 31, 2023. Georgia and Massachusetts may be the next states to enact similar laws.

Key Elements

As with the Colorado Privacy Act (CPA) and the Virginia Consumer Data Protection Act (VCDPA), the UCPA was modeled in part on the CCPA, CPRA, and the EU General Data Protection Regulation (GDPR). But there are some variations. Key elements of the UCPA include:

  • Jurisdictional Scope. The UCPA applies to controllers or processors that
    • conduct business in Utah or produce a product or service that is targeted to consumers who reside in Utah; and
    • have annual revenue of $25 million or more; and
    • satisfy one or more of the following: (i) during a calendar year, control or process personal data of at least 100,000 consumers, or (ii) control or process personal data of at least 25,000 consumers and derive over 50 percent of gross revenue from the sale of personal data.

Notably, as indicated above, it is not required that a controller be located in Utah to be subject to the UCPA.


  • Exemptions. The UCPA has a long list of entities and data to which the law does not apply. Although not an exhaustive list, some examples of excluded entities include governmental entities and their contractors when working on their behalf, tribes, non-profit corporations, institutions of higher education, HIPAA covered entities and business associates, and financial institutions. The UCPA also excludes certain categories of personal information, such as protected health information under HIPAA, identifiable private information involved in certain human subject research, deidentified information, and personal data regulated by FERPA. The UCPA also exempts personal data processed or maintained in the course of an individual applying to, being employed by, or acting as an agent or independent contractor of a controller, processor, or third party, to the extent that collection and use of the data are related to the individual’s role. This last exemption generally covers employee and applicant data, including data used to administer employee benefits.


  • Personal Data. Using a simpler definition than the CCPA/CPRA, the UCPA defines personal data to mean, “information that is linked or reasonably linkable to an identified individual or an identifiable individual.”


  • Sensitive Data. Like both the GDPR and the CPRA, the UCPA addresses a subset of personal data referred to as “sensitive data.” This is defined as personal data that reveals such items as racial or ethnic origin (unless processed by a video communication service); religious beliefs; medical history, mental or physical health, and medical treatment (unless processed by certain health care providers); sexual orientation; or citizenship or immigration status. This category of personal data also includes genetic and biometric data, as well as geolocation data. In general, controllers may not process sensitive data without providing clear notice and an opportunity to opt out.


  • Consumer. A “consumer” under the UCPA is “an individual who is a resident of Utah acting in an individual or household context.” Like the VCDPA, Utah’s law states a consumer does not include a “natural person acting in a commercial or employment context.”


  • Consumer Rights. Subject to the exemptions and other limitations set forth under the law, Utah residents will be afforded the following rights with respect to their personal data:
    • To confirm whether or not a controller is processing their personal data and to access such personal data;
    • To delete personal data that the consumer provided to the controller. It is unclear whether this includes data provided to a processor or other third party with respect to the controller;
    • To obtain a copy of their personal data that they previously provided to the controller in a portable and readily usable format that allows them to transmit the data to another controller without impediment, where the processing is carried out by automated means; and
    • To opt out of the processing of the personal data for purposes of (i) targeted advertising, or (ii) sale.


  • Controllers. Similar to the CCPA/CPRA, CPA, and VCDPA, controllers must provide an accessible and clear privacy notice that includes, among other things, the categories of personal data collected by the controller and how consumers may exercise a right with respect to their personal data. As with the CPRA, controllers are required to establish, implement, and maintain reasonable administrative, physical, and technical safeguards.


  • Processors. Processors are persons that “process” (collect, use, store, disclose, analyze, delete, or modify) personal information on behalf of controllers. Before processors may do so, they must enter into a contract that (i) clearly sets forth instructions for processing personal data, the nature and purpose of the processing, the type of data subject to processing, the duration of the processing, and the parties’ rights and obligations; (ii) requires the processor to ensure each person processing personal data is subject to a duty of confidentiality with respect to the personal data; and (iii) requires the processor to engage any subcontractor pursuant to a written contract that requires the subcontractor to meet the same obligations as the processor with respect to the personal data. Businesses with consumers in multiple states will have to compare these required provisions against those required under the CPRA, CPA, and VCDPA, as well as other privacy and security frameworks that may be applicable.


  • Enforcement. The Utah Attorney General’s office has exclusive authority to enforce the UCPA. In addition, a controller or processor must be provided 30 days’ written notice of any violation, allowing the entity the opportunity to cure the violation. Failure to cure allows the Attorney General to recover actual damages for the consumer and a fine of up to $7,500 per violation. A private right of action is not available under the UCPA.


States across the country are contemplating ways to enhance their data privacy and security protections. Accordingly, organizations, regardless of their location, should be assessing and reviewing their data collection activities, building robust data protection programs, and investing in written information security programs.

The FTC recently settled its enforcement action involving data privacy and security allegations against an online seller of customized merchandise. In addition to agreeing to pay $500,000, the online merchant consented to multiyear compliance, recordkeeping, and FTC reporting requirements. The essence of the FTC’s seven-count Complaint is that the merchant failed to properly disclose a data breach, misrepresented its data privacy and security practices, and did not maintain reasonable data security practices.

The federal consumer protection agency has broad enforcement authority under Section 5 of the Federal Trade Commission Act (FTC Act), which prohibits “unfair or deceptive acts or practices in or affecting commerce.” This enforcement action follows other recent FTC actions on similar issues, suggesting the agency is ramping up enforcement consistent with the overall direction of the Biden Administration concerning cybersecurity. There are steps organizations can take to minimize FTC scrutiny, and one place to start might be website disclosures, perhaps in connection with addressing the imminent website privacy compliance obligations under the California Privacy Rights Act.

In reviewing the FTC enforcement action in this matter, it is interesting to see what the agency considered personal information:

names, email addresses, telephone numbers, birth dates, gender, photos, social media handles, security questions and answers, passwords, PayPal addresses, the last four digits and expiration dates of credit cards, and Social Security or tax identification numbers

Some are obvious, some not so much.

The FTC also examined the merchant’s public disclosures concerning privacy and security of personal information, including from its website privacy policy, as well as email responses to customers and checkout pages. Here’s an example:

[Company] also pledges to use the best and most accepted methods and technologies to insure [sic] your personal information is safe and secure

In addition, the agency pointed to practices it viewed as not providing reasonable security for personal information stored on a network, such as:

  • Failing to implement “readily-available…low-cost protections,” against “well-known and reasonably foreseeable vulnerabilities,” such as “Structured Query Language” (“SQL”) injection, Cascading Style Sheets (“CSS”) and HTML injection, etc.
  • Storing personal information such as Social Security numbers and security questions and answers in clear, readable text
  • Using the SHA-1 hashing algorithm to protect passwords, a method deprecated by the National Institute of Standards and Technology in 2011
  • Failing to maintain a process for receiving and addressing security vulnerability reports from third-party researchers, academics, or other members of the public
  • Not implementing patch management policies and procedures to ensure the timely remediation of critical security vulnerabilities
  • Maintaining lax password policies that allow, for example, users to select the same word, including common dictionary words, as both the password and user ID
  • Storing personal information indefinitely on a network without a business need
  • Failing to log sufficient information to adequately assess cybersecurity events
  • Failing to comply with existing written security policies
  • Failing to reasonably respond to security incidents, including timely disclosure of security incidents
  • Not adequately assessing the extent of, and remediating, malware infections after learning that devices on the network were infected with malware

The above list (including the additional items listed in the Complaint and the Consent Order) provides valuable insight into the measures the FTC might expect to be in place to secure personal information.
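Two of the failures above, SQL injection exposure and deprecated SHA-1 password hashing, map directly to low-cost fixes. The sketch below (Python, standard library only; the table name, cost parameters, and sample values are illustrative and not drawn from the Complaint) contrasts the criticized practices with the kinds of readily available protections the FTC referenced: parameterized queries and a salted, memory-hard password hash.

```python
import hashlib
import hmac
import os
import sqlite3
from typing import Optional, Tuple

# --- Password storage ---
# Unsalted SHA-1 is the kind of scheme the Complaint criticized: NIST
# deprecated it for this purpose, and it is fast enough to brute-force.
def weak_hash(password: str) -> str:
    return hashlib.sha1(password.encode()).hexdigest()

# A salted, memory-hard KDF (scrypt, in the standard library) is one
# accepted alternative. Cost parameters here are illustrative only.
def strong_hash(password: str, salt: Optional[bytes] = None) -> Tuple[bytes, bytes]:
    salt = salt if salt is not None else os.urandom(16)
    digest = hashlib.scrypt(password.encode(), salt=salt, n=2**14, r=8, p=1)
    return salt, digest

def verify(password: str, salt: bytes, digest: bytes) -> bool:
    _, candidate = strong_hash(password, salt)
    return hmac.compare_digest(candidate, digest)  # constant-time compare

# --- SQL injection ---
# Parameterized queries are the classic low-cost protection: the driver
# treats user input strictly as data, never as SQL.
conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE users (email TEXT)")
conn.execute("INSERT INTO users VALUES (?)", ("alice@example.com",))

payload = "nobody@example.com' OR '1'='1"  # classic injection attempt
rows = conn.execute(
    "SELECT email FROM users WHERE email = ?", (payload,)
).fetchall()
print(rows)  # [] -- the payload matched nothing; it was never parsed as SQL
```

Had the query been built by string concatenation, the `OR '1'='1'` payload would have returned every row in the table.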

The FTC also scrutinized the merchant’s disclosures on its website concerning the EU-U.S. Privacy Shield, alleging it failed to comply with some of the representations made in those disclosures. This aspect of the FTC’s enforcement action is notable because the agency acknowledged that the Privacy Shield had been invalidated by a decision of the European Court of Justice on July 16, 2020. But the FTC made clear that even though the Privacy Shield was determined to be insufficient under the GDPR to permit the lawful transfer of personal data from the EU to the U.S., the merchant had represented that it would comply with the provisions of that framework, and it could be held to those representations.

The agreement reached in the Consent Order requires the merchant to take several steps, such as:

  • WISP. Within 60 days of the order, establish and implement a comprehensive written information security program (WISP) that protects the privacy, security, confidentiality, and integrity of personal information. To meet this requirement, the merchant must, among other things, (i) provide the WISP to its board or senior management every 12 months and not more than 30 days after a security incident, (ii) implement a range of specific safeguards and controls such as encryption, MFA, annual training, etc., (iii) consult with third-party experts concerning the WISP, and (iv) evaluate the capability of third party service providers to safeguard personal information and contractually require them to do so.
  • Independent WISP Assessment. The merchant must obtain independent third-party assessments of its WISP. The reporting period for these assessments is the first 180 days after the Consent Order, and each two-year period for 20 years following the Order.

To help survive FTC scrutiny, it is not enough to maintain reasonable safeguards to protect personal information. Companies also must ensure the statements that they make about those safeguards are consistent with the practices that they maintain. This includes statements in website privacy policies, customer receipts, and other correspondence. Additionally, companies must fully investigate and appropriately respond to potential security incidents that may have caused, or could lead to, unauthorized access to or acquisition of personal information.

Included within the Consolidated Appropriations Act, 2022, signed by President Joe Biden on March 15, the Cyber Incident Reporting for Critical Infrastructure Act of 2022 (Act) creates new data breach reporting requirements. This new mandate furthers the federal government’s efforts to improve the nation’s cybersecurity, spurred at least in part by the Colonial Pipeline cyberattack that snarled the flow of gas on the east coast for days and the SolarWinds attack.  It’s likely the threat of increasing cyberattacks from Russia in connection with its war effort in Ukraine also was front of mind for Congress and the President when enacting this law.

In short, the Act requires certain entities in the critical infrastructure sector to report to the Department of Homeland Security (DHS):

  1. a covered cyber incident not later than 72 hours after the covered entity reasonably believes the incident occurred, and
  2. any ransom payment within 24 hours of making the payment as a result of a ransomware attack (even if the ransomware attack is not a covered cyber incident reportable under item 1 above)

Supplemental reporting also is required if substantial new or different information becomes available and until the covered entity notifies DHS that the incident has concluded and has been fully mitigated and resolved. Additionally, covered entities must preserve information relevant to covered cyber incidents and ransom payments according to rules to be issued by the Director of the Cybersecurity and Infrastructure Security Agency (Director).

The effective date of these requirements, along with the time, manner, and form of the reports, among other items, will be set forth in rules issued by the Director. The Director has 24 months to issue a notice of proposed rulemaking, and 18 months after that to issue a final rule.

Some definitions are helpful.

  • Covered entities. The Act covers entities in a critical infrastructure sector, as defined in Presidential Policy Directive 21, that meet the definition to be established by the Director. Examples of these sectors include critical manufacturing, energy, financial services, food and agriculture, healthcare, information technology, and transportation. In further defining covered entities, the Director will consider factors such as the consequences to national and economic security that could result from compromising an entity, whether the entity is a target of malicious cyber actors, and whether access to such an entity could enable disruption of critical infrastructure.
  • Covered cyber incidents. Reporting under the Act will be required for “covered cyber incidents.” Borrowing in part from Section 2209(a)(4) of Title XXII of the Homeland Security Act of 2002, a cyber incident under the Act generally means an occurrence that jeopardizes, without lawful authority, the integrity, confidentiality, or availability of information on an information system, or the information system itself. To be covered under the Act, the cyber incident must be a “substantial cyber incident” experienced by a covered entity, as further defined by the Director.
  • Information systems. An information system means a “discrete set of information resources organized for the collection, processing, maintenance, use, sharing, dissemination, or disposition of information” which includes industrial control systems, such as supervisory control and data acquisition systems, distributed control systems, and programmable logic controllers.
  • Ransom payment. A ransom payment is the transmission of any money or other property or asset, including virtual currency, or any portion thereof, which has at any time been delivered as ransom in connection with a ransomware attack.

A report of a covered cyber incident will need to include the information specified in the Act and in the rules to be issued by the Director.

According to Giving USA, charitable contributions in 2020 exceeded $470 billion, 70 percent of which came from individuals.  Individuals deciding to donate to a particular organization may be considering factors beyond the organization’s particular mission, however compelling it may be. Misleading GoFundMe campaigns, FTC crackdowns on deceptive charities, and poorly run organizations are some of the reasons for increased scrutiny. One more reason is concern over how not-for-profit and charitable organizations handle donor personal information.

According to some reports, a third of donors perform research before donating. To assist these donors, several third-party rating sites, such as Charity Navigator, the Wise Giving Alliance, and CharityWatch, do much of the legwork for donors. They collect large amounts of data about these organizations, such as financial position, use of donated funds, corporate governance, transparency, and other practices. They obtain most of that data from the organizations’ Forms 990 and websites, where many organizations publish privacy policies.

Rating sites such as Charity Navigator base their ratings on comprehensive methodologies. A significant component of Charity Navigator’s rating, for example, relates to accountability and transparency, made up of 17 categories. A review of an organization’s website informs five of those 17 categories, namely (i) board members listed, (ii) key staff listed, (iii) audited financials published, (iv) Form 990 published, and (v) privacy policy content. Charity Navigator explains why it considers website privacy policies and which policies receive the highest rating:

Donors can be reluctant to contribute to a charity when their name, address, or other basic information may become part of donor lists that are exchanged or sold, resulting in an influx of charitable solicitations from other organizations. Our analysts check the charity’s website to see if the organization has a donor privacy policy in place and what it does and does not cover.  Privacy policies are assigned to one of the following categories:

Yes: This charity has a written donor privacy policy published on its website, which states unambiguously that (1) it will not share or sell a donor’s information with anyone else, nor send donor mailings on behalf of other organizations or (2) it will only share or sell personal information once the donor has given the charity specific permission to do so.

Opt-out: The charity has a written privacy policy published on its website which enables donors to tell the charity to remove their names and contact information from lists the charity shares or sells. How a donor can have themselves removed from a list differs from one charity to the next, but any and all opt-out policies require donors to take specific action.

No: This charity either does not have a written donor privacy policy in place or the existing policy does not meet our criteria for protecting contributors’ personal information.

The privacy policy must be specific to donor information. A general website policy which references “visitor” or “user” personal information is insufficient. A policy that refers to donor information collected on the website is also not sufficient as the policy must be comprehensive and applicable to both online and offline donors. The existence of a privacy policy of any type does not prohibit the charity itself from contacting the donor for informational, educational, or solicitation purposes.

Regulatory compliance obligations for websites have expanded in recent years, with privacy policies among the requirements. Even a website compliant with applicable regulation, however, may not derive an optimal score with Charity Navigator. For example, in many cases, website privacy statements need only apply to data collected on the website, not elsewhere at the organization. Also, website regulations do not require donors be specifically addressed. The point reduction for non-conforming privacy policies is relatively small for Charity Navigator, but can have an impact. Rating company CharityWatch reports on privacy policies “as an informational benchmark” but does not factor that information into its ratings.

The extent to which donors might direct their charitable dollars away from organizations without optimal ratings on privacy is unclear. At the same time, quickly posting a privacy policy to enhance a third-party rating in the hope of driving additional donors is probably not a prudent response. Not-for-profit, charitable organizations want to be sure their website privacy policies are compliant and consistent with their practices involving data, while also positioning them well to maximize donations in support of their mission. Drafting and maintaining these policies takes considerable care and attention.

According to a recent survey, about 45% of companies do not have a Chief Information Security Officer (CISO). As West Monroe’s “The Importance of a CISO” observes, it would be terrific for all organizations to have a CISO, but that simply may not be practical for some, particularly smaller organizations. Recent internal audit guidance issued by the federal Department of Labor (DOL), however, directs its investigators to verify the designation of a CISO when auditing retirement plans.

Nearly a year ago, on April 14, the DOL issued cybersecurity guidance for retirement plans (Guidance). Shortly thereafter, the Department began to weave its newly minted cybersecurity guidance into plan audits. Basically, the Guidance has three prongs:

  • Cybersecurity best practices for plans and their service providers
  • Exercise of prudence as an ERISA fiduciary when selecting service providers with respect to cybersecurity practices
  • Educating plan participants and beneficiaries on basic rules to reduce risk of fraud or loss to their retirement plan accounts

The DOL offers 12 helpful “best practices” for any cybersecurity program. Number four on its list provides:

  4. Clearly Defined and Assigned Information Security Roles and Responsibilities. For a cybersecurity program to be effective, it must be managed at the senior executive level and executed by qualified personnel. As a senior executive, the Chief Information Security Officer (CISO) would generally establish and maintain the vision, strategy, and operation of the cybersecurity program, which is performed by qualified personnel who should meet the following criteria:
    • Sufficient experience and necessary certifications.
    • Initial and periodic background checks.
    • Regular updates and training to address current cybersecurity risks.
    • Current knowledge of changing cybersecurity threats and countermeasures.

Currently, DOL personnel who conduct retirement plan audits are likely to be very familiar with the full range of ERISA requirements for retirement plans. Until recently, however, the DOL had not made clear that cybersecurity was one of those requirements. In an effort to assist its investigators when auditing such plans, the agency provided an investigative guide that closely tracks the Guidance, and offers investigators suggestions for practices to look for during the cybersecurity audit. With regard to number four above, the investigative guide urges investigators to:

Look for:

    • Evidence verifying the designation of a senior leader as the Chief Information Security Officer (CISO) and demonstrating the CISO’s qualifications and accountability for the management, implementation, and evaluation of the cybersecurity program.

As DOL investigators grapple with applying the Guidance along with their internal resources, it remains unclear whether they will be fixated on requiring in all cases an express designation of a “CISO” by all retirement plan sponsors and plan service providers. Of course, it will be important for organizations to clearly define and assign information security roles and responsibilities. The lack of a “CISO” designation alone should not necessarily mean an organization’s data security efforts are rudderless.

Persons in positions such as Director of IT, Chief Information Officer, or IT manager all may help to support the organization’s efforts to maintain the privacy and security of plan data. But their roles and expertise may not be sufficient to fully address data security for the organization, the plan, or its service providers. For instance, persons in these positions may be appropriately focused on the organization’s IT systems and equipment, for which security is only one issue. While these roles are important, the focus should be on making sure there is qualified senior leadership with information security roles and responsibilities. The West Monroe article cited above nicely identifies the attributes such senior leadership should have to fill this need:

  • Executive Presence: The [leader] should have the executive presence to effectively represent the organization’s position regarding information security and the ability to influence executives. They need to be able to identify and assess threats, and then translate the risks into language executives can understand.
  • Business Knowledge: The [leader] needs to understand business operations and the critical data the organization is trying to protect. They need to view business operations from a risk versus security perspective and implement controls to minimize risks and business disruptions.
  • Security Knowledge: A [leader] must be capable of understanding complex security configurations and reports from the technical perspective, and then be capable of translating the relevant technical details into language that other executives can understand.

This raises an important question for many organizations struggling to address cybersecurity, and not just for their retirement plans: how does the organization assess the qualifications of candidates for such a position, and then the individual’s performance once in the position? Another important question, suggested above, is whether smaller organizations can support a position with this level of expertise and qualifications. The DOL’s investigative guide seems to acknowledge this issue:

For many plans – especially small plans – IT systems, data, and cybersecurity risks are chiefly managed by third-party recordkeepers and service providers, and these service providers are an appropriate focus for an investigation of cybersecurity practices.

In doing so, the DOL also brings the plan’s service providers into focus.

The key takeaway is to think carefully about your organization’s approach to managing its cybersecurity obligations and requirements, including with respect to employee benefit plans. Organizations should have a qualified member of senior leadership assigned and accountable for the management, implementation, and evaluation of the cybersecurity program.

Co-authors: Nadine C. Abrams and Richard Mrizek 

In a ruling that may have significant impact on the constant influx of biometric privacy suits under the Biometric Information Privacy Act (BIPA) in Illinois, the Illinois Supreme Court will soon weigh in on whether claims under Sections 15(b) and (d) of the BIPA, 740 ILCS 14/1, et seq., “accrue each time a private entity scans a person’s biometric identifier and each time a private entity transmits such a scan to a third party, respectively, or only upon the first scan and first transmission.” Adopting a “per-scan” theory of accrual or liability under the BIPA would lead to absurd and unjust results, argued a friend-of-the-court brief filed by Jackson Lewis in Cothron v. White Castle Systems, Inc., in the Illinois Supreme Court, on behalf of a coalition of trade associations whose more than 30,000 members employ approximately half of all workers in the State of Illinois.

To date, more than 1,450 class action lawsuits have been filed under BIPA. Businesses that collect, use, and store biometric data should be tracking the Cothron decision closely.  The full update on Jackson Lewis’s brief in the Cothron case before the Illinois Supreme Court is available here.




Some members of the California legislature want their state to remain the leader for data privacy and cybersecurity regulation in the U.S. This includes protections for biometric information, similar to those under the Biometric Information Privacy Act in Illinois, 740 ILCS 14 et seq. (BIPA). State Senator Bob Wieckowski introduced SB 1189 on February 17, 2022, which would add protections for biometric information in his state on top of other statutory provisions, such as the California Privacy Rights Act (CPRA) which goes into effect January 1, 2023.

If enacted, SB 1189 would significantly expand privacy and security protection for biometric information in California and likely influence additional legislative activity in the U.S. Notably, unlike some of the limitations on application in the California Consumer Privacy Act (CCPA), the Bill would apply to any private entity (defined as an individual, partnership, corporation, limited liability company, association, or similar group, however organized, other than the University of California). It could also open the door to a wave of litigation, similar to what organizations subject to the BIPA currently face.

SB 1189 includes a fairly broad definition of biometric information, tracking the definition under the CCPA that went into effect January 1, 2020:

(1) “Biometric information” means a person’s physiological, biological, or behavioral characteristics, including information pertaining to an individual’s deoxyribonucleic acid (DNA), that can be used or is intended to be used, singly or in combination with each other or with other identifying data, to establish individual identity.

(2) Biometric information includes, but is not limited to, imagery of the iris, retina, fingerprint, face, hand, palm, vein patterns, and voice recordings, from which an identifier template, such as a faceprint, a minutiae template, or a voiceprint, can be extracted, and keystroke patterns or rhythms, gait patterns or rhythms, and sleep, health, or exercise data that contain identifying information.

Many are familiar with or have encountered devices that scan fingerprints or a person’s face which may capture or create biometric information. This definition appears to go beyond those more “traditional” technologies. So, for example, if you’ve developed a unique style for tapping away at your keyboard while at work, you might be creating biometric information. The contours of this definition are quite vague, so private entities should carefully consider the capturing of certain data sets and the capabilities of new devices, systems, equipment, etc.

The Bill would prohibit private entities from collecting, capturing, purchasing, etc. a person’s biometric information unless the private entity:

  • requires the biometric information either to: (i) provide a service requested or authorized by the subject of the biometric information, or (ii) satisfy another valid business purpose (as defined in the CCPA) which is included in the written public policy described below, AND
  • first (i) informs the person or their legally authorized representative, in writing, of the biometric information being collected, stored, or used and of the specific purpose and length of time for which it is being collected, stored, or used, and (ii) receives a written release executed by the subject of the biometric information or their legally authorized representative.

In this regard, SB 1189 looks a lot like the BIPA, with some additional requirements for the written release. For example, the written release may not be combined with an employment contract or another consent form.

Under SB 1189, private entities in possession of biometric information also would be required to develop and make available to the public a written policy that establishes a retention schedule and guidelines for destroying biometric information. In general, destruction of the information would be required no later than one year after the individual’s last intentional interaction with the private entity. This is similar to the period required in the Texas biometric law.

In addition to requiring reasonable safeguards to protect biometric information, the Bill would place limitations on the disclosure of biometric information. Unless disclosed to complete a financial transaction requested by the data subject or disclosed as required by law, a written release would be required to disclose biometric information. The release would need to indicate the data to be disclosed, the reason for the disclosure, and the intended recipients.

Perhaps the most troubling provision of the Bill for private entities is section 1798.306. Again, looking a lot like the BIPA, SB 1189 would establish a private right of action permitting individuals to allege a violation of the law and bring a civil action for any of the following:

  • The greater of (i) statutory damages between $100 and $1,000 per violation per day, or (ii) actual damages.
  • Punitive damages.
  • Reasonable attorney’s fees and litigation costs.
  • Any other relief, including equitable or declaratory relief, that the court determines appropriate.

Though still early in the legislative process for SB 1189, its introduction illustrates a continued desire by state and local lawmakers to enact protections for biometric information. See, e.g., recent developments in New York, Maryland, and Oregon described in our Biometric Law Map. Before implementing technologies or systems that might involve biometric information, private entities need to carefully consider the emerging legislative landscape.

On January 24, 2022, New York Attorney General Letitia James announced a $600,000 settlement agreement with EyeMed Vision Care, a vision benefits company, stemming from a 2020 data breach compromising the personal information of approximately 2.1 million individuals across the United States, including nearly 99,000 in New York State (the “Incident”).

This settlement was the result of an enforcement action brought by the NY Attorney General under New York’s Stop Hacks and Improve Electronic Data Security Act (“SHIELD Act”). Enacted in 2019, the SHIELD Act aims to strengthen protections for New York residents against data breaches affecting their private information. The SHIELD Act imposes expansive data security obligations and updates New York’s existing data breach notification requirements. Our SHIELD Act FAQs are available here.

Notably, EyeMed found itself in the AG’s crosshairs not because of what it did after discovering the Incident, but instead because of what it failed to do beforehand.  Specifically, the AG alleged that, pre-Incident, EyeMed had not maintained reasonable safeguards in the areas of authentication, password management, logging and monitoring, and data retention.  The AG also alleged that EyeMed’s privacy policy had misrepresented the extent to which it protected the privacy, security, confidentiality, and integrity of personal information.

Based on these findings, the AG successfully secured—in addition to the $600,000 payment—EyeMed’s agreement to maintain a written information security program.  This program must include, at minimum, policies and procedures related to password management, authentication and account management, encryption, penetration testing, logging and monitoring, and data retention.  EyeMed is required to review this program annually and to provide training to its workforce on compliance with the program’s requirements.

The EyeMed breach stemmed from a common form of cyberattack in which the bad actor gains access to certain of an organization’s email accounts—and to the sensitive data therein.  In EyeMed’s case, the bad actor accessed emails and attachments containing a wide range of PHI and PII, including:

  • Names;
  • Contact information, including addresses;
  • Dates of birth;
  • Account information, including identification numbers for health insurance accounts and vision insurance accounts;
  • Full or partial Social Security Numbers;
  • Medicaid and Medicare numbers;
  • Driver’s license or other government ID numbers;
  • Birth or marriage certificates;
  • Medical diagnoses and conditions; and
  • Medical treatment information.

EyeMed first became aware of the bad actor’s activities on July 1, 2020—one (1) week after the attacker initially gained access to EyeMed’s email account—and subsequently blocked the bad actor’s access to this account.  After conducting an internal investigation and engaging a forensic cybersecurity firm (through outside counsel), EyeMed determined that the bad actor may have exfiltrated documents and information from the account.  On September 28, 2020, EyeMed began notifying affected individuals and regulators about the breach and offering affected individuals identity theft protection services.

The SHIELD Act is far-reaching.  It affects any business (including a small business) that holds private information of a New York resident—regardless of whether the organization does business in New York. Under the Act, individuals and businesses that collect computerized data, including private information about New York residents, must implement and maintain reasonable administrative, physical, and technical safeguards.

The fine and non-monetary requirements of the EyeMed settlement are significant and highlight the need for organizations to carefully craft—and regularly revisit—their written information security programs.  As the AG made clear when announcing this settlement, enforcing compliance with the SHIELD Act’s mandate that organizations maintain reasonable data security safeguards will be a focal point for her office moving forward.

The Massachusetts Information Privacy and Security Act (MIPSA) continues to advance through the state legislative process, and is now before the full legislature. While the Act has several hurdles to clear before becoming law, it’s notable for two reasons. First, the comprehensive nature of the MIPSA exemplifies the direction state data protection laws are heading in the absence of a comprehensive federal consumer data protection law. Second, given the borderless nature of e-commerce, the most robust state consumer data protection law will likely become the de facto national consumer data protection law, and the MIPSA may take that title. This post highlights significant portions of the current version of the Act.

Who is protected? 

The MIPSA protects the personal information of Massachusetts residents.

Who is subject to the MIPSA?

The Act applies to an entity that has annual global gross revenue in excess of 25 million dollars; determines the purposes and means of processing the personal information of not less than 100,000 individuals; or is a data broker. In addition, the entity must conduct business in the state or, if not physically present in the state, process personal information in the context of offering goods or services targeted at state residents or monitor the in-state behavior of residents. Where an entity does not otherwise meet these criteria, it may voluntarily certify to the state Attorney General that it is in compliance with and agrees to be bound by the MIPSA.

Are any entities exempt?

Massachusetts state agencies and government bodies, national securities associations, and registered futures associations are exempt.

What data is protected?

MIPSA applies to the personal information of a Massachusetts resident, which is defined as information that identifies, relates to, describes, is reasonably capable of being associated with, or could reasonably be linked, directly or indirectly, with an identified or identifiable individual. Personal information does not include de-identified information or publicly available information. For the limited purposes of a sale, personal information also includes information that identifies, relates to, describes, is reasonably capable of being associated with, or could reasonably be linked, directly or indirectly, with an identified or identifiable household.

Does the Act include special protections for Sensitive Information?

The Act carves out heightened protections for sensitive information. These include the right to notice of collection and use, as well as the right to limit use and disclosure to purposes necessary to perform the services or provide the goods requested and to other controller internal uses authorized by the Act.

Sensitive information is personal information that reveals an individual’s racial or ethnic origin, religious beliefs, philosophical beliefs, union membership, citizenship, or immigration status. It also includes biometric information or genetic information that is processed for the purpose of uniquely identifying an individual; personal information concerning a resident’s mental or physical health diagnosis or treatment, sex life or sexual orientation; specific geolocation information; personal information from a child; a Social Security Number, driver’s license number, military identification number, passport number, or state-issued identification card number; and a financial account number, credit or debit card number, with or without any required security code, access code, personal identification number or password, that would permit access to an individual’s financial account.

Is any personal information exempt from the Act?

Protected health information under HIPAA is exempt as is certain data, information, and health records created under HIPAA and Massachusetts state law. Exempt data also includes data collected, processed, or regulated with respect to clinical trials, the Health Care Quality Improvement Act of 1986, the Patient Safety and Quality Improvement Act, FCRA, Driver’s Privacy Protection Act, FERPA, the Farm Credit Act, GLBA, COPPA, the Massachusetts Health Insurance Connector and Preferred Provider Arrangements.

Does the MIPSA apply to employee personal information or information collected in the B2B context?

The Act also exempts personal information collected and processed in the context of an individual acting as a job applicant to, an employee of, or an agent or independent contractor of a controller, processor, or third party, including emergency contact information and information used to administer benefits for another person relating to the individual.

Information collected and used in the course of an individual acting in a commercial context is exempt.

What are the controller’s obligations under the MIPSA?

The Act creates an affirmative obligation to implement appropriate technical and organizational safeguards to ensure the security of the information. In addition, the controller must have a lawful basis to process the personal information. Processing must be done in a fair and transparent manner, which includes providing appropriate privacy notices at or before the point of collection. The controller must collect personal information for an identified and legitimate purpose and processing should be limited to what is necessary to achieve the purpose. The information must be accurate and retained only as long as necessary to achieve the purpose for which it was collected. For processing that may involve a high risk of harm to individuals, the controller may be obligated to conduct a risk assessment. When engaging a processor, the controller must enter into a data processing agreement with the processor that contains mandated provisions designed to ensure the privacy and security of personal information.

What rights do protected individuals have?

Massachusetts residents have the right to know, access, port, delete, and correct their personal information, subject to certain limitations. The Act also provides the right to opt out of the sale of personal information and to limit the use and disclosure of sensitive information, particularly with respect to targeted advertising. The data controller is prohibited from discriminating against the individual for exercising any of these rights.

Can my organization be sued for violations of the law?

The MIPSA does not include a private cause of action for violations of the Act. However, the proposed bill also amends the state data breach notification law to provide residents with a private right of action where their personal information was subject to a data breach resulting from the entity’s failure to implement reasonable safeguards.

How will the law be enforced?

The state Attorney General is authorized to commence a civil investigation when there is reasonable cause to believe an entity has engaged in, is engaging in, or is about to engage in a violation of the Act. After notice, the entity will have 30 days to cure the violation. In the event the entity fails to cure, the Attorney General may seek a temporary restraining order, preliminary injunction, or permanent injunction to restrain any violations, and may seek civil penalties of up to $7,500 for each violation.

Next steps?

The MIPSA sets a high bar for data protection practices. Whether enacted in whole or part, the Act provides a road map for where data protection laws are headed. Many of the 2022 proposed state laws follow or surpass the protections introduced by the CCPA. Preparing to meet each more comprehensive law will require continued data mapping, ongoing evaluation and development of written information security programs, heightened scrutiny of vendor relationships and agreements, risk assessments, and updated employee data protection and security awareness training.

We will continue to monitor the progress of this bill.

When Massachusetts issued its data security regulations in 2009 (Regulations), it led the way for states on data security. The Regulations became effective 12 years ago, almost to the day, on March 1, 2010. The Bay State is now contemplating comprehensive privacy legislation, the Massachusetts Information Privacy and Security Act (MIPSA), similar to what has been enacted in California, Colorado, and Virginia. As we review this legislation, the MIPSA provides an important reminder, even if it is not ultimately enacted.

The MIPSA would provide individuals a private right of action if their personal information is subject to a breach of security under Massachusetts law caused by a failure to implement reasonable cybersecurity controls. Damages could be up to $500 per individual per incident or actual damages, whichever is greater. The CCPA contains a similar provision.

Under the MIPSA, if enacted in its current form and following a similar approach taken in neighboring Connecticut, controllers would be able to avoid punitive damages in such cases provided they:

  • created, maintained, and complied with a written cybersecurity program with administrative, physical, and technical safeguards that conforms to an industry-recognized framework; and
  • designed the program in accordance with the Regulations based on an appropriate scale and scope.

Examples of industry recognized frameworks under MIPSA would include:

  • National Institute of Standards and Technology’s (NIST) special publications 800-171 or 800-53
  • The Center for Internet Security’s “Critical Security Controls”

The Wall Street Journal reported on Friday that the state legislature’s Joint Committee on Advanced Information Technology passed the MIPSA with a bipartisan vote and no objections. It now moves to the full legislature.

If you have waited 12 years to develop that perfect written information security program (WISP), this might be the time to apply the finishing touches. If you have opened a new business in or expanded to Massachusetts, or recently began collecting personal information of Massachusetts residents, a WISP is a critical compliance requirement. If the MIPSA is enacted, a WISP could play a significant role in minimizing exposure to your organization should it be sued in connection with a data breach.

