On February 9, the Securities and Exchange Commission (“SEC”) voted to propose rule 206(4)-9 under the Advisers Act and 38a-2 under the Investment Company Act (collectively, “Proposed Rule”). In general, the Proposed Rule would require all advisers and funds to adopt and implement cybersecurity policies and procedures containing several elements. While acknowledging spending on cybersecurity measures in the financial services industry is considerable, the SEC suggested it may nonetheless be inadequate, citing a recent survey finding that 58% of financial firms self-reported “underspending” on cybersecurity measures.

For financial services firms, the stakes are particularly high—it is where the money is.

The Proposed Rule would apply to advisers and funds that are registered or required to be registered with the SEC. Covered advisers include those who provide a variety of services to their clients, such as: financial planning advice, portfolio management, pension consulting, selecting other advisers, publication of periodicals and newsletters, security rating and pricing, market timing, and educational seminars. Compliance will be particularly important for those advisers and funds serving the retirement plan industry, which now face increased cybersecurity scrutiny by plan fiduciaries under the Department of Labor’s cybersecurity guidance.

The SEC recognizes that the level of risk can vary from adviser to adviser, but also notes that advisers on the lower end of the risk continuum do not get a pass on compliance. A data breach at an adviser that only offers advice on wealth allocation strategies may not have a significant negative effect on its clients, given the limited personal information it maintains. Compare that to a breach experienced by an adviser performing portfolio management services, which holds greater quantities of more sensitive personal information and may have a degree of control over client assets. But even if personal information is not acquired by hackers, a ransomware event could disrupt the adviser’s services and adversely impact its clients, for example by limiting their ability to access funds.

So, there is not a one-size-fits-all approach to addressing cybersecurity risks and the Proposed Rule would allow firms to tailor their cybersecurity policies and procedures to fit the nature and scope of their business and address their individual cybersecurity risks. This is not unlike other cybersecurity frameworks, such as the Security Rule for healthcare providers and plans under the Health Insurance Portability and Accountability Act.

According to the SEC’s Fact Sheet, the Proposed Rule would:

  • Require advisers and funds to adopt and implement written policies and procedures that are reasonably designed to address cybersecurity risks. This requirement includes a comprehensive, documented risk assessment of the adviser’s or fund’s business operations. At least annually, advisers and funds would need to review and evaluate the design and effectiveness of their cybersecurity policies and procedures, allowing them to update those measures in the face of ever-changing cyber threats and technologies.
  • Require advisers to report significant cybersecurity incidents to the Commission on proposed Form ADV-C, with similar reporting for funds.
  • Enhance adviser and fund disclosures related to cybersecurity risks and incidents. For instance, the Proposed Rule would amend adviser and fund disclosure requirements to provide current and prospective advisory clients and fund shareholders with improved information regarding cybersecurity risks and cybersecurity incidents.
  • Require advisers and funds to maintain, make, and retain certain cybersecurity-related books and records. This would include records related to compliance with the Proposed Rule and the occurrence of cybersecurity incidents.

The SEC hopes the rules would promote a more comprehensive framework to address cybersecurity risks for advisers and funds, resulting in a reduction in risk and impact of a significant cybersecurity incident. But the SEC also hopes to give clients and investors better information with which to make investment decisions, and itself better information with which to conduct comprehensive monitoring and oversight of ever-evolving cybersecurity risks and incidents affecting advisers and funds.

Facial recognition, voiceprint, and other biometric-related technologies are booming, and they continue to infiltrate different facets of everyday life. The technology brings countless potential benefits, as well as significant data privacy and cybersecurity risks.

Whether it is facial recognition technology being used with COVID-19 screening tools and in law enforcement, continued use of fingerprint-based time management systems, or the use of various biometric identifiers such as voiceprint for physical security and access management, applications in the public and private sectors involving biometric identifiers and information continue to grow, and so do concerns about the privacy and security of that information and about civil liberties. Over the past few years, significant compliance and litigation risks have emerged that factor heavily into the deployment of biometric technologies, particularly facial recognition. This is particularly the case in Illinois under the Biometric Information Privacy Act (BIPA).

Read our Special Report, which discusses these concerns and the growing legislative activity. You can also access our Biometric Law Map.

In honor of Data Privacy Day, we provide the following “Top 10 for 2022.”  While the list is by no means exhaustive, it does provide some hot topics for organizations to consider in 2022.

  1. State Consumer Privacy Law Developments

On January 1, 2020, the California Consumer Privacy Act (CCPA) ushered into the U.S. a range of new rights for consumers, including:

  • The right to request deletion of personal information;
  • The right to request that a business disclose the categories of personal information collected and the categories of third parties to which the information was sold or disclosed;
  • The right to opt out of the sale of personal information; and
  • The right to bring a private right of action against a business that experiences a data breach affecting the consumer’s personal information as a result of the business’s failure to implement “reasonable safeguards.”

In November of 2020, California voters passed the California Privacy Rights Act (CPRA), which amends and supplements the CCPA, expanding compliance obligations for companies and consumer rights. Of particular note, the CPRA extends the employment-related personal information carve-out until January 1, 2023. The CPRA also introduces consumer rights relating to certain sensitive personal information, imposes an affirmative obligation on businesses to implement reasonable safeguards to protect certain consumer personal information, and prevents businesses from retaliating against employees for exercising their rights. The CPRA’s operative date is January 1, 2023, and draft implementation regulations are expected by July 1, 2022. Businesses should monitor CCPA/CPRA developments and ensure their privacy programs and procedures remain aligned with current CCPA compliance requirements. For practical guidance on navigating compliance, check out our newly updated CCPA/CPRA FAQs.

In addition to the California developments, in 2021 Virginia and Colorado also passed consumer privacy laws similar in kind to the CCPA, effective January 1, 2023 and July 1, 2023, respectively. While the three state laws share common principles, including consumer rights of deletion, access, correction, and data portability for personal data, they also contain key nuances that pose challenges for broad compliance. Moreover, at least 26 states have considered or are considering similar consumer privacy laws, which will only further complicate the growing patchwork of state compliance requirements.

In 2022, businesses are strongly urged to prioritize their understanding of what state consumer privacy obligations they may have, and strategize for implementing policies and procedures to comply.

  2. Biometric Technology-Related Litigation and Legislation

There was a continued influx of biometric privacy class action litigation in 2021, and this will likely continue in 2022. In early 2019, the Illinois Supreme Court handed down a significant decision concerning the ability of individuals to bring suit under Illinois’s Biometric Information Privacy Act (BIPA). In short, individuals need not allege actual injury or adverse effect beyond a violation of their rights under BIPA to qualify as an aggrieved person entitled to seek liquidated damages, attorneys’ fees and costs, and injunctive relief under the Act.

Consequently, simply failing to adopt a policy required under BIPA, collecting biometric information without a release, or sharing biometric information with a third party without consent could trigger liability under the statute. Potential damages are substantial, as BIPA provides for statutory damages of $1,000 per negligent violation or $5,000 per intentional or reckless violation of the Act. There continues to be a flood of BIPA litigation, primarily against employers with biometric timekeeping/access systems that have failed to adequately notify and obtain written releases from their employees for such practices.

Biometric class action litigation has also been impacted by COVID-19. Screening programs in the workplace may involve the collection of biometric data, whether by a thermal scanner, facial recognition scanner, or other similar technology. In late 2020, plaintiffs’ lawyers filed a class action lawsuit on behalf of employees concerning their employer’s COVID-19 screening program, which is alleged to have violated BIPA. According to the complaint, employees were required to undergo facial geometry scans and temperature scans before entering company warehouses, without the prior consent from employees required by law. The case was still alive and well at the start of 2022: after significant attempts by the defense, a federal district judge in Illinois declined to dismiss the proposed class action, finding that the allegations concerning “possession” and “collection” of biometric data pass muster at this stage. Many businesses have been sued under BIPA for similar COVID-19-related claims in the past year, and 2022 will likely see continued class action litigation in this space.

In 2021, biometric technology-related laws began to evolve at a rapid pace, signaling a continued trend into 2022. In July 2021, New York City established BIPA-like requirements for retail and hospitality businesses that collect and use “biometric identifier information” from customers. In September 2021, the City of Baltimore officially banned private use of facial recognition technology; the local ordinance prohibits persons (including residents, businesses, and most of the city government) from “obtaining, retaining, accessing, or using certain face surveillance technology or any information obtained from certain face surveillance technology.” Other localities, including Portland, Oregon, and San Francisco, have also established prohibitions on the use of biometric technology. State legislatures have also increased their focus on biometric technology regulation. In addition to Illinois’s BIPA, Washington and Texas have similar laws, and states including Arizona, Florida, Idaho, Massachusetts, and New York have proposed such legislation. The proposed biometric law in New York state would mirror Illinois’s BIPA, including its private right of action provision. In California, the CCPA also broadly defines biometric information as one of the categories of personal information protected by the law.

Additionally, states are increasingly amending their breach notification laws to add biometric information to the categories of personal information that require notification, including a 2021 amendment in Connecticut and 2020 amendments in California, D.C., and Vermont. Similar proposals across the U.S. are likely in 2022.

In response to the constantly evolving legislation related to biometric technology, we have created an interactive biometric law state map to help businesses that want to deploy these technologies, which inevitably require the collection, storage, and/or disclosure of biometric information, track their privacy and security compliance obligations.

  3. Ransomware Attacks

Ransomware attacks continued to make headlines in 2021, impacting large organizations including Colonial Pipeline, the Steamship Authority of Massachusetts, the NBA, JBS Foods, the D.C. Metropolitan Police Department, and many more. Ransomware attacks are nothing new, but they are increasing in severity, with more frequent attacks and higher ransom payments, in large part due to increased remote work and the associated security challenges. The healthcare industry in particular has been substantially impacted since the onset of the COVID-19 pandemic: a recent study by Comparitech found that ransomware attacks on the healthcare industry have resulted in financial losses of over $20 billion in impacted revenue, litigation, and ransom payments, and the figure is still growing.

In fact, the FBI, jointly with the Cybersecurity and Infrastructure Security Agency (CISA), went so far as to issue a warning to be on high alert for ransomware attacks over the holidays, in light of numerous targeted attacks over other holidays earlier in the year.

Moreover, in 2021 the National Institute of Standards and Technology (NIST) released a preliminary draft of its Cybersecurity Framework Profile for Ransomware Risk Management. The NIST framework provides steps for protecting against ransomware attacks, recovering from ransomware attacks, and determining your organization’s state of readiness to prevent and mitigate ransomware attacks.

Ransomware continues to present a significant threat to organizations as we move into 2022. Organizations may not be able to prevent all attacks, but it is important to remain vigilant and be aware of emerging trends.

Here are some helpful resources for ransomware attack prevention and response:

  4. Biden Administration Prioritizes Cybersecurity

In large part due to the significant threat of ransomware attacks discussed above, the Biden Administration has made clear that cybersecurity protections are a priority. In May of 2021, on the heels of the Colonial Pipeline ransomware attack that snarled the flow of gas on the east coast for days, the Biden Administration issued an Executive Order on “Improving the Nation’s Cybersecurity” (EO). The EO was in the works prior to the Colonial Pipeline cyberattack, but was certainly prioritized as a result. The EO made a clear statement on the policy of the Administration: “It is the policy of my Administration that the prevention, detection, assessment, and remediation of cyber incidents is a top priority and essential to national and economic security.  The Federal Government must lead by example.  All Federal Information Systems should meet or exceed the standards and requirements for cybersecurity set forth in and issued pursuant to this order.” The EO mostly impacts the federal government and its agencies. However, several of its requirements will reach certain federal contractors, and it will also influence the private sector.

Shortly after issuing the EO, the Biden Administration followed in August 2021 with a National Security Memorandum (NSM) intended to improve cybersecurity for critical infrastructure systems. The NSM established the Industrial Control Systems Cybersecurity Initiative (the “Initiative”), a voluntary, collaborative effort between the federal government and members of the critical infrastructure community aimed at improving voluntary cybersecurity standards for companies that provide critical services.

The primary objective of the Initiative is to encourage, develop, and enable deployment of a baseline of security practices, technologies and systems that can provide threat visibility, indications, detection, and warnings that facilitate response capabilities in the event of a cybersecurity threat.  According to the President’s Memo, “we cannot address threats we cannot see.”

And most recently, in early January 2022, President Biden issued an additional NSM to improve the cybersecurity of National Security, Department of Defense, and Intelligence Community Systems.  “Cybersecurity is a national security and economic security imperative for the Biden Administration, and we are prioritizing and elevating cybersecurity like never before…Modernizing our cybersecurity defenses and protecting all federal networks is a priority for the Biden Administration, and this National Security Memorandum raises the bar for the cybersecurity of our most sensitive systems,” stated the White House in its issuance of the latest NSM.

The U.S. government will continue to ramp up efforts to strengthen its cybersecurity as we head into 2022, impacting both the public and private sectors. Businesses across all sectors should be evaluating their data privacy and security threats and vulnerabilities and adopting measures to address their risk and improve compliance.

  5. COVID-19 Privacy and Security Considerations

During 2020 and 2021, COVID-19 presented organizations large and small with new and unique data privacy and security considerations. And while we had high hopes that increased vaccination rates would put this pandemic in the rearview mirror, the latest omicron variant showed us otherwise. Most organizations, particularly in their capacity as employers, needed to adopt COVID-19 screening and testing measures, resulting in the collection of medical and other personal information from employees and others. While the Supreme Court stayed OSHA’s ETS mandating that employers with 100+ employees require COVID-19 vaccination, and the Biden Administration ultimately withdrew the same, some localities have instituted mandates depending on industry, and many employers have voluntarily decided to institute vaccine requirements for employees. Ongoing vigilance will be needed to maintain the confidential and secure collection, storage, disclosure, and transmission of medical and COVID-19-related data, which may now include tracking data related to vaccinations or the side effects of vaccines.

Several laws apply to the data organizations may collect in this context. In the case of employees, for example, the Americans with Disabilities Act (ADA) requires maintaining the confidentiality of employee medical information, which may include COVID-19-related data. Several state laws also impose safeguard requirements and other protections for such data that organizations should be aware of when they, or others on their behalf, process that information.

Many employees will continue to telework during 2022 (and beyond). A remote workforce creates increased risks and vulnerabilities for employers in the form of sophisticated phishing email attacks or threat actors gaining unauthorized access through unsecured remote access tools. It also presents privacy challenges for organizations trying to balance business needs and productivity with expectations of privacy. These risks and vulnerabilities can be addressed and remediated through periodic risk assessments, robust remote work and bring your own device policies, and routine monitoring.

As organizations continue to work to create safe environments for the in-person return of workers, customers, students, patients and visitors, they may rely on various technologies such as wearables, apps, devices, kiosks, and AI designed to support these efforts. These technologies must be reviewed for potential privacy and security issues and implemented in a manner that minimizes legal risk.

Some reminders and best practices when collecting and processing information referred to above and rolling out these technologies include:

  • Complying with applicable data protection laws when data is collected, shared, secured, and stored, including the ADA, the Genetic Information Nondiscrimination Act, the CCPA, the GDPR, and various state laws. This includes providing required notice at collection under the California Consumer Privacy Act (CCPA), or required notice and a documented lawful basis for processing under the GDPR, if applicable;
  • Complying with contractual agreements regarding data collection; and
  • Contractually ensuring that vendors who have access to or collect data on behalf of the organization implement appropriate measures to safeguard the privacy and security of that data.

  6. “New” EU Standard Contractual Clauses

In July of 2020, the Court of Justice of the European Union (CJEU) published its decision in Schrems II, which declared the EU-U.S. Privacy Shield invalid for cross-border data transfers and affirmed the validity of standard contractual clauses (“SCCs”) as an adequate mechanism for transferring personal data from the EEA, subject to heightened scrutiny. However, the original SCCs did not adequately address the EU Commission’s concerns about the protection of personal data.

On June 4, 2021, the EU Commission adopted “new” modernized SCCs, effective since September 27, 2021, to replace the 2001, 2004, and 2010 versions in use up to that point. The EU Commission updated the SCCs to address more complex processing activities, the requirements of the GDPR, and the Schrems II decision. These clauses are modular so they can be tailored to the type of transfer. If a data exporter transfers data from the EU to a U.S. organization, the U.S. organization must execute the new SCCs unless the parties rely on an alternate transfer mechanism or an exception exists. This applies regardless of whether the U.S. company receives or accesses the data as a data controller or processor. The original SCCs applied only to controller-controller and controller-processor transfers of personal data from the EU to countries without a Commission adequacy decision; the updated clauses expand coverage to processor-processor and processor-controller transfers as well. While the original SCCs were designed for two parties, the new clauses can be executed by multiple parties. The clauses also include a “docking clause” so that new parties can be added to the SCCs throughout the life of the contract.

The obligations of the data importer are numerous and include, without limitation:

  • documenting the processing activities it performs on the transferred data,
  • notifying the data exporter if it is unable to comply with the SCCs,
  • returning or securely destroying the transferred data at the end of the contract,
  • applying additional safeguards to “sensitive data,”
  • adhering to purpose limitation, accuracy, minimization, retention, and destruction requirements,
  • notifying the exporter and data subject if it receives a legally binding request from a public authority to access the transferred data, if permitted, and
  • challenging a public authority access request if it reasonably believes the request is unlawful.

The SCCs require the data exporter to warrant there is no reason to believe local laws will prevent the importer from complying with its obligations under the SCCs. In order to make this representation, both parties must conduct and document a risk assessment of the proposed transfer.

If an organization that transfers data cross-border has not already done so, it should be implementing the new procedures and documents for the SCCs, unless it is relying on an alternate transfer mechanism or an exception applies. Organizations will also need to review any ongoing transfers made in reliance on the old SCCs and take steps to comply. As with new transfers, this will require a documented risk assessment and a comprehensive understanding of the organization’s process for accessing and transferring personal data protected under the GDPR. For additional guidance on the new EU SCCs, our comprehensive FAQs are available here.

  7. TCPA

In April 2021, the U.S. Supreme Court issued a monumental decision with significant impact on the future of Telephone Consumer Protection Act (TCPA) class action litigation, reversing and remanding the underlying decision of the Ninth Circuit.

The Supreme Court unanimously concluded, in a decision written by Justice Sotomayor, that to qualify as an “automatic telephone dialing system” (ATDS) under the TCPA, a device must have the capacity either to store, or to produce, a telephone number using a random or sequential number generator.

“Expanding the definition of an autodialer to encompass any equipment that merely stores and dials telephone numbers would take a chainsaw to these nuanced problems when Congress meant to use a scalpel,” Justice Sotomayor pointed out in rejecting the Ninth Circuit’s broad interpretation of the law.

Moreover, Sotomayor noted that “[t]he statutory context confirms that the autodialer definition excludes equipment that does not ‘us[e] a random or sequential number generator.’” The TCPA’s restrictions on the use of autodialers include using an autodialer to call certain “emergency telephone lines” and lines “for which the called party is charged for the call.” The TCPA also prohibits the use of an autodialer “in such a way that two or more telephone lines of a multiline business are engaged simultaneously.” The Court narrowly concluded that “these prohibitions target a unique type of telemarketing equipment that risks dialing emergency lines randomly or tying up all the sequentially numbered lines at a single entity.”

The Supreme Court’s decision resolved a growing circuit split: several circuits had previously interpreted the definition of an ATDS broadly to encompass any equipment that merely stores and dials telephone numbers, while other circuits adopted a narrower interpretation, in line with the Supreme Court’s ruling. The decision was expected to help resolve the ATDS circuit split and provide greater clarity and certainty for parties facing TCPA litigation. In the six months following the Supreme Court’s decision, the Institute for Legal Reform documented a 31% drop in TCPA filings compared to the six months prior to the ruling. Nonetheless, many claims based on broad ATDS definitions are still surviving the early stages of litigation in the lower courts, and some states have enacted (or are considering) “mini-TCPAs” with a broader definition of ATDS. While the Supreme Court’s decision was considered a win for defendants facing TCPA litigation, organizations are advised to review and update their telemarketing and/or automatic dialing practices to ensure TCPA compliance as they move into 2022.

  8. Global Landscape of Data Privacy & Security

2021 was a significant year for the global landscape of data privacy and security. As discussed above, on June 4, the European Commission adopted new standard contractual clauses for the transfer of personal data from the EU to “third countries,” including the U.S. On August 20, China passed its first comprehensive privacy law, the Personal Information Protection Law (PIPL), similar in kind to the EU’s GDPR; the law took effect in November of 2021. In addition, China published (1) the Security Protection Regulations on the Critical Information Infrastructure and (2) the Data Security Law, which aim to regulate data activities, implement effective data safeguards, protect the legitimate rights and interests of individuals and entities, and ensure state security; both took effect in September of 2021. Finally, Brazil enacted the Lei Geral de Proteção de Dados Pessoais (LGPD), its first comprehensive data protection regulation, again with GDPR-like principles. The LGPD became enforceable in August of 2021.

In 2022, U.S. organizations may face increased data protection obligations as a result of where they have offices, facilities, or employees; whose data they collect; where the data is stored; whether it is received from outside the U.S.; and how it is processed or shared. These factors may trigger country-specific data protection obligations such as notice and consent requirements, vendor contractual obligations, data localization or storage concerns, and safeguarding requirements. Some of these laws may apply to data collection activities in a country regardless of whether the U.S. business is located there.

  9. Federal Consumer Privacy Law

Numerous comprehensive data protection laws have been proposed at the federal level in recent years. These laws have generally stalled amid bipartisan debate over federal preemption and a private right of action. And while, every year, we ask ourselves whether this will be the year, 2022 may indeed be the year the U.S. enacts a federal consumer privacy law. 2022 had barely begun when a coalition that includes the U.S. Chamber of Commerce, together with local business organizations in over 20 states, issued a letter to Congress highlighting the importance of enacting a federal consumer privacy law as soon as possible.

“Data is foundational to America’s economic growth and keeping society safe, healthy and inclusive…Fundamental to the use of data is trust,” the coalition noted. “A national privacy law that is clear and fair to business and empowering to consumers will foster the digital ecosystem necessary for America to compete.”

Moreover, with California, Virginia, and Colorado all having comprehensive consumer privacy laws (as discussed above), and approximately half of U.S. states contemplating similar legislation, there is a growing patchwork of state laws that “threatens innovation and create consumer and business confusion,” as stated in the coalition’s letter to Congress.

Will 2022 be the year the U.S. government enacts a federal consumer privacy law? Only time will tell.  We will continue to update as developments unfold.

  10. Cyber Insurance

Over the past several years, if your organization experienced a cyberattack, such as ransomware or a diversion of funds due to a business email compromise (BEC), and you had cyber insurance, you likely were very thankful. However, if you are renewing that policy (or in the cyber insurance market for the first time), you are probably looking at much steeper rates, higher deductibles, and even co-insurance, compared to just a year or two ago. That assumes you can find a carrier to provide competitive terms at all, although there are some steps organizations can take to improve their insurability.

Claims paid under cyber insurance policies are up significantly, according to Marc Schein, CIC, CLCS, National Co-Chair of the Cyber Center of Excellence for Marsh McLennan Agency, who closely tracks cyber insurance trends. Mr. Schein identified two key drivers hardening the cyber insurance market: ransomware and business interruption.

According to Fitch Ratings’ 2020 cyber report, cyber insurance direct written premiums for the property and casualty industry increased 22% in the past year to over $2.7 billion, reflecting growing demand for cyber coverage. The industry statutory direct loss plus defense and cost containment (DCC) ratio for standalone cyber insurance rose sharply in 2020 to 73%, compared with an average of 42% for the previous five years (2015-2019). The average paid loss for a closed standalone cyber claim rose to $358,000 in 2020 from $145,000 in 2019.

The effects of these, other increases in claims, and losses from cyberattacks had a dramatic impact on cyber insurance. Perhaps the most concerning development for organizations in the cyber insurance market is the significantly increased scrutiny carriers are applying to an applicant’s insurability.

There are no silver bullets, but implementing administrative, physical, and technical safeguards to protect personal information may dramatically reduce the chances of a cyberattack, and that is music to an underwriter’s ears. As an organization heads into 2022, ensuring such safeguards are instituted and regularly reviewed can go a long way.

*      *     *     *     *

For these reasons and others, we believe 2022 will be a significant year for privacy and data security.

Happy Privacy Day!

Few want to get past the COVID-19 pandemic more than leaders of federal and state unemployment benefit departments. For the last two years, these departments have been successfully targeted for fraud and data breaches, racking up billions in losses. Thousands of employees across the country, including yours truly, have had false claims submitted in their names.

Why is this happening? It appears to be a combination of factors, most leading back to one driving force – COVID-19. Congress’s passage of rich unemployment compensation benefits, most notably the Pandemic Unemployment Assistance (PUA) program, to offset the economic carnage stemming from the pandemic created a significant incentive for criminal hackers. During the same time, the number of workers in state unemployment offices went down due to layoffs, while the number of applications for unemployment benefits skyrocketed. Couple that with an expansion of benefits to workers without traditional pay stubs (e.g., gig workers), making verification harder, and the data security gaps and challenges regularly facing state agencies and organizations generally, and there is a perfect storm for fraud and data breaches to proliferate.

Here’s a rundown of just some of the losses reported by Yahoo! News:

  • Oregon – $24 million in 2020
  • Washington – $646 million in 2020
  • California – $20 billion, since the start of the pandemic through October 2021
  • Federal – $87.3 billion since the start of the pandemic through September 30, 2021, per the DOL (relying on a historical improper payment rate of 10%).

What are some of the effects? There is, of course, a significant loss of taxpayer dollars, not to mention all the time spent trying to resolve the fraud, getting the much-needed benefits to those whose benefits were delayed due to the fraud, and implementing stronger controls.

With so many employees learning of and reporting false unemployment claims submitted in their names, employers across the country have had to jump in to help. Frequently, many employees at a single company reported fraud at the same time, making it seem as if the company was the victim of a breach. While it is always important to appropriately investigate suspected data incidents, a compromise of the employer’s systems generally was not the reason for the employees’ reports in these cases.

Is it coming to an end? Maybe not. On Friday, Pennsylvania’s Department of Labor and Industry (L&I) reported it is investigating “sophisticated attacks” on its systems. According to reports,

unemployment recipients stopped receiving their checks, and L&I telephone agents told them they were among numerous Pennsylvanians whose direct-deposit banking information had been changed.

What can affected organizations and individuals do? Affected federal and state agencies have been and continue to be taking steps to minimize these attacks and the resulting fraud. One of those steps is to deploy facial recognition technologies to more strongly verify the identities of claimants. By late summer, more than half of the states in the U.S. had contracted with ID.me to provide ID verification services. For private sector organizations, the deployment of such technologies to verify the identities of customers and employees faces a growing web of regulation. Other efforts to curb this kind of activity include steps all organizations might consider, like enabling multi-factor authentication (MFA) – something the PA L&I wishes it had done. Hopefully, pandemics are not regular occurrences, but planning for business interruption is critical.

For organizations and their employees affected by unemployment fraud, it is important to quickly report incidents and follow recommended steps by the applicable agency. Below are just a few of the online resources that may be helpful.

The California Consumer Privacy Act (CCPA), considered one of the most expansive U.S. privacy laws to date, went into effect on January 1, 2020. The CCPA placed significant limitations on the collection and sale of a consumer’s personal information and provided consumers new and expansive rights with respect to their personal information.

Less than one year later, on November 3, 2020, a majority of California residents voted in favor of Proposition 24, which included the California Privacy Rights Act (CPRA). The CPRA builds upon the CCPA’s extensive framework of privacy rights and obligations, both expanding and modifying key aspects of the CCPA, and generally becomes effective January 1, 2023.

Click here to read our CCPA/CPRA FAQs

We substantially updated our prior CCPA FAQs to cover many of the CPRA changes. Our hope is they help businesses learn more about the obligations they may have and strategies for implementing policies and procedures to comply.

Efforts to secure systems and data from a cyberattack often focus on measures such as multifactor authentication (MFA), endpoint monitoring solutions, antivirus protections, and role-based access management controls, and for good reason. But there is a basic principle of data protection that, when applied across an organization, can significantly reduce the impact of a data incident – the minimum necessary principle. A data breach reported late last year by the Rhode Island Public Transit Authority (RIPTA) highlights the importance of this relatively simple but effective tool.

In December 2021, RIPTA sent notification of a data breach to several thousand individuals who were not RIPTA employees. Reports of the incident prompted inquiries from a state Senator in Rhode Island, Louis P. DiPalma, and union officials who represented the affected individuals. According to Rhode Island’s Department of Administration (DOA), a forensic analysis conducted in connection with the incident indicates the affected files included health plan billing records pertaining to State of Rhode Island employees, not RIPTA employees. The DOA goes on to state that:

[s]tate employee data was incorrectly shared with RIPTA by an external third party who had responsibility for administering the state’s health plan billing.

An investigation is underway to confirm exactly what happened. Recent conversations between state officials and union representatives reported in the press indicate that a RIPTA payroll clerk received a file containing state employee health plan data in August 2020 and stored it on the employee’s hard drive, where it remained until August 2021, when the cyberattack on RIPTA occurred. It is unclear why the employee received the information, from whom, or whether it was appropriate to maintain it.

Regardless, the “minimum necessary” principle, simply stated, requires that organizations take reasonable steps so that confidential and personal information is accessed, used, maintained, or disclosed only as needed to carry out the applicable business functions. Consider, for example, that retention policies are becoming increasingly important from a compliance perspective, such as under the California Privacy Rights Act of 2020 (CPRA), which amends and supplements the California Consumer Privacy Act (CCPA), the EU General Data Protection Regulation (GDPR), and the Illinois Biometric Information Privacy Act (BIPA). This principle can be applied at multiple points in the operations of the organization, including without limitation:

  • When requesting information. Think about what elements of information the organization collects from customers, students, patients, vendors, employees, and others. Is it more information than is needed to carry out the purpose(s) for the collection? Can portals, forms, etc. be modified to limit the information collected?
  • When receiving information. Employees cannot always control the information they receive from parties outside the organization. But when they do, what steps or guidelines are in place to determine what is needed and what is not needed? For information that is not needed, what is the process for alerting the sender, if necessary, returning the data, and/or removing it from the systems?
  • When using information. Employees carry out many critical business functions that require the use of confidential and personal information. Do they always need all of it? Are there instances where less information would be sufficient for the processing of an important business function?
  • When storing information. The task at hand has been completed and the question becomes what information should be retained. The answer can be a complex web of legally mandated retention requirements, contractual obligations, business needs, and other considerations. But organizations should carefully analyze these issues and establish protocols for employees to follow. Note that under the CPRA, a covered business may not retain a consumer’s personal information for longer than is reasonably necessary for the stated purpose for which it was collected.
  • When responding to requests or disclosing information. Whether engaging in billing and collection activities, responding to an attorney demand letter, reporting information to the government, administering benefit plans for employees, or any number of other typical business functions, organizations make disclosures of confidential and personal information. Important questions to ask are (i) what data does the requesting party really need, (ii) what classifications of information are actually in the file being disclosed and are there limitations on the disclosure of that information, and (iii) whether the response or disclosure can have the same effect with less data.
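To make the disclosure-side questions above concrete, the minimum necessary principle can be thought of as a field-level filter applied before a record leaves the organization. The sketch below is only illustrative; the record fields and the allow-list are hypothetical, and actual minimization rules would come from the organization’s own retention and disclosure policies.

```python
# Sketch of field-level data minimization applied before disclosure.
# The fields and the allow-list below are hypothetical examples.

FIELDS_NEEDED_FOR_BILLING = {"employee_id", "plan_id", "billing_period", "amount_due"}

def minimize_record(record: dict, allowed_fields: set) -> dict:
    """Return a copy of the record containing only the fields
    actually needed for the stated purpose of the disclosure."""
    return {k: v for k, v in record.items() if k in allowed_fields}

full_record = {
    "employee_id": "E-1042",
    "plan_id": "HP-7",
    "billing_period": "2021-08",
    "amount_due": 412.50,
    "ssn": "000-00-0000",         # sensitive: not needed for billing
    "home_address": "1 Main St",  # sensitive: not needed for billing
}

disclosed = minimize_record(full_record, FIELDS_NEEDED_FOR_BILLING)
# 'disclosed' omits the SSN and home address entirely, so a later breach
# of the recipient's systems exposes less personal information.
```

The design point is simple: data that is never disclosed or retained cannot be compromised later.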

In thinking about these questions, there may not be a clear right or wrong answer to whether the information should or should not have been collected, used, stored, or disclosed. However, from a risk management perspective, it is helpful to review business procedures, practices, operations, forms, etc. for ways to minimize exposure to confidential and personal information. Applying the minimum necessary principle can be an effective way of minimizing the organization’s data footprint so that, should it experience a security incident, less data stands to be compromised.

The use of smart dashcams and vehicle cameras, including those leveraging AI technology, may trigger the next wave of BIPA litigation, according to two cases filed in Illinois this week.

Enacted in 2008, the Illinois Biometric Information Privacy Act, 740 ILCS 14 et seq. (the “BIPA”), went largely unnoticed until a few years ago when a handful of cases sparked a flood of putative class action litigation over the collection, use, storage, and disclosure of biometric information. Many of these cases were filed by plaintiffs who alleged BIPA violations when time management devices called for them to swipe their finger to clock in or out of work. Use of those devices, many plaintiffs claim, resulted in the collection of their fingerprints without the corresponding notice, consent, and other measures required under the BIPA. The focus may be shifting to a new technology: AI-powered dashcams.

Employees who drive regularly to perform job functions raise several issues for their organizations – safety, productivity, loss prevention, and expense reimbursement, among others. For these reasons, some organizations deploy telematics and related technologies to better manage their fleets. A tool in this process is the vehicle camera, such as the dashcam, capable of monitoring (and recording) video and/or audio of the driver, passengers, and in some cases persons outside the vehicle. These devices also can track location and how a vehicle is being driven – hard acceleration, sharp turns, lane changing, etc. But it is the use of AI and machine learning technologies that is raising questions about whether biometric identifiers and/or information are being collected.

According to at least one of these recently filed complaints, the vehicle camera does not just take a traditional video recording of the driver. It uses AI and machine learning technologies to detect driver behavior. More specifically, product descriptions claim the intelligent cameras can identify if drivers are inattentive, distracted, or tired through facial mapping technology which scans the geometry of the face and analyzes the resulting data.

Under BIPA, a “biometric identifier” generally means “a retina or iris scan, fingerprint, voiceprint, or scan of hand or face geometry,” and “biometric information” means “any information, regardless of how it is captured, converted, stored, or shared, based on an individual’s biometric identifier used to identify an individual.”

It is unclear at this point whether these complaints have any merit. However, organizations that are using AI-powered vehicle cameras should be reviewing that technology carefully with their vendors to understand the nature and extent of the data being collected. For assistance with understanding the legal framework concerning biometric information, please see our Biometric Law Map.

With ransomware and other cyber threats top of mind for most in the c-suite these days, a question frequently raised is whether a particular organization is a target for hackers. Of course, nowadays, any organization is at risk of an attack, but the question is whether some organizations are targeted more than others. A recent Insurance Journal article discusses a paper published in September 2021 that identifies a factor that could elevate the risk of being targeted, a factor many in cyber might not have expected, “greenwashing.”

Around this time of year, many offering commentary on cybersecurity issues (including us!) postulate on what lies ahead for the year, trends to watch, and emerging risks. For example, Embroker Insurance Services published a comprehensive report in December 2021, outlining a wealth of cyberattack statistics and trends, including a view on the types of organizations most vulnerable to cyberattacks:

  • Banks and financial institutions
  • Healthcare institutions
  • Corporations
  • Higher education

It is not difficult to see why entities in these industries (and others) are thought of most frequently. They typically have thousands, sometimes millions of customers, many locations, hundreds of employees, and lots and lots of personal information. They maintain increasingly complex information systems amid an ever-expanding regulatory environment, sometimes without commensurate budgetary support.

However, according to the University of Delaware paper cited by the Insurance Journal, an organization’s “corporate social performance” or “CSP” can affect its likelihood of being subject to a cyberattack. Specifically, according to the paper, organizations that have CSP strengths outside of their core business but a less-than-stellar record in other areas are at increased risk of a data breach.

“The increased likelihood of breach for firms with seemingly disingenuous CSP records suggests that perceived ‘greenwashing’ efforts that attempt to mask poor social performance make firms attractive targets for security exploitation.”

An organization’s CSP, as measured by its environmental, social and corporate governance (ESG) rating, is an emerging metric for evaluating organizations, even if its impact on corporate financial performance (CFP) remains unclear.  For example, a proposed rule issued in October 2021 by the Department of Labor would help pave the way for increased consideration of ESG factors by plan fiduciaries when selecting investment options for retirement plan assets.

The greater attention to CSP and ESG shared by many, however, evidently may include a segment of people willing to take more extreme measures to achieve their goals. In 2008, according to reports, fires that severely damaged at least five luxury homes in a Seattle suburb were suspected to have been started by “ecoterrorists,” angry that developers marketed the subdivision as “built green.” This is not unlike the motive identified by the University of Delaware paper for launching cyberattacks against certain organizations – that is, stopping organizations from using ESG to appeal to customers without also making meaningful changes to core business practices.

Of course, it is not clear whether the paper has captured what motivates cyberattacks more often than not, or whether organizations engaged in greenwashing are in fact being targeted at a higher rate than others, if they are “targeted” at all. At the same time, it is certainly not unprecedented for individuals to take extreme measures to advance their desires for social, environmental, and other changes. Either way, organizations should be considering all potential risks and appropriately weighing them when developing their information security policies, incident response plans, and other safeguards for protecting systems and information.

Photo courtesy of Euronews.com

Over the past several years, if your organization experienced a cyberattack, such as ransomware or a diversion of funds due to a business email compromise (BEC), and you had cyber insurance, you likely were very thankful. However, if you are renewing that policy (or in the cyber insurance market for the first time), you are probably looking at much steeper rates, higher deductibles, and even co-insurance, compared to just a year or two ago. That assumes you can find a carrier willing to offer competitive terms at all, although there are steps organizations can take to improve their insurability.

What’s going on?

The short answer is what one might expect: claims paid under cyber insurance policies are significantly up, according to Marc Schein*, CIC, CLCS, National Co-Chair Cyber Center of Excellence for Marsh McLennan Agency, who closely tracks cyber insurance trends. Mr. Schein identified the key drivers hardening the cyber insurance market: ransomware and business interruption.

  • Ransomware: According to FBI data, adjusted losses from ransomware matters tripled from 2019 to 2020. Further, according to an Allianz Global Corporate & Specialty (AGCS) cyber insights report, cited in Insurance Journal, the U.S. experienced a 62% increase in ransomware incidents during the first six months of 2021 and a 225% increase in ransom demands.
  • Business interruption: Business interruption costs following a ransomware attack more than doubled over the past year, increasing from $761,106 to $1.85 million in 2021, with down time averaging 23 days, according to the same AGCS report.

According to Fitch Ratings’ Cyber Report 2020, insurance direct written premiums for the property and casualty industry increased 22% last year to over $2.7 billion, reflecting the growing demand for cyber coverage. The industry statutory direct loss plus defense and cost containment (DCC) ratio for standalone cyber insurance rose sharply in 2020 to 73% compared with an average of 42% for the previous five years (2015-2019). The average paid loss for a closed standalone cyber claim moved to $358,000 in 2020 from $145,000 in 2019.
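For readers less familiar with insurance metrics, the 73% figure above is simply losses plus defense and cost containment expenses divided by premiums. A back-of-the-envelope sketch follows; the dollar figures used are illustrative only, not Fitch’s underlying data.

```python
# Back-of-the-envelope loss ratio calculation.
# The dollar inputs below are illustrative, not Fitch's actual figures.

def loss_dcc_ratio(paid_losses: float, dcc_expenses: float, premiums: float) -> float:
    """Direct loss plus defense and cost containment (DCC) ratio,
    expressed as a percentage of direct premiums."""
    return 100.0 * (paid_losses + dcc_expenses) / premiums

# e.g., $1.6B in losses plus $0.37B in DCC against $2.7B in premiums
ratio = loss_dcc_ratio(1.6e9, 0.37e9, 2.7e9)
print(f"{ratio:.0f}%")  # prints 73%
```

The higher this ratio climbs relative to premiums collected, the less profitable the line of coverage, which is what pushes carriers toward the rate increases and tighter terms described below.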

These and other increases in claims and losses from cyberattacks have had a dramatic impact on cyber insurance.

  • Rate increases of 100% to 300% are not uncommon. According to Marsh’s November Cyber Market Report, the average U.S. cyber price per million in coverage increased 174% over the 12-month period ending September 2021.
  • Capacity has decreased dramatically, with $10 million limits becoming challenging to secure.
  • Policy changes, such as increases in deductibles, retention, sublimits, and co-insurance on ransomware payments, are making cyber coverage look more like health insurance.

What can we do?

Perhaps the most concerning development for organizations in the cyber insurance market is the significantly increased scrutiny carriers are applying to an applicant’s insurability. The days of the three-question application process may be over. According to Mr. Schein, before looking to procure cyber coverage, an astute buyer should take stock of the cybersecurity controls underwriters now expect. Examples of these include:

  • Multi-factor authentication across the applicant’s systems including for email, remote access, vendor access, etc.
  • Adoption of a tested incident response plan.
  • Presence of an endpoint detection solution.
  • Security awareness training, including phishing training.
  • Removing end-of-life software.
  • Closed remote access ports, including remote desktop protocol (RDP).
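For organizations preparing for renewal, the controls listed above lend themselves to a simple gap analysis before the underwriting questionnaire ever arrives. The sketch below is hypothetical; the control keys and descriptions are made up for illustration and do not reflect any carrier’s actual application.

```python
# Hypothetical self-assessment against commonly requested underwriting controls.
# Control keys and descriptions are illustrative, not a carrier's questionnaire.

CONTROLS = {
    "mfa": "Multi-factor authentication for email, remote access, and vendor access",
    "ir_plan": "Adoption of a tested incident response plan",
    "edr": "Presence of an endpoint detection solution",
    "training": "Security awareness training, including phishing training",
    "eol_removed": "End-of-life software removed",
    "rdp_closed": "Remote access ports, including RDP, closed",
}

def readiness_gaps(status: dict) -> list:
    """Return descriptions of controls the applicant has not yet implemented."""
    return [desc for key, desc in CONTROLS.items() if not status.get(key, False)]

# Example: an applicant with only MFA and training in place.
for gap in readiness_gaps({"mfa": True, "training": True}):
    print("Gap:", gap)
```

Closing these gaps before applying not only improves the odds of favorable terms but, as noted below, may also strengthen the organization’s compliance posture under state safeguard requirements.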

This is consistent with Mr. Schein’s experience with organizations anxious to bolster information security controls in connection with the underwriting process for cyber insurance. The controls mentioned above are typically best practices that underwriters are strongly encouraging, which may also improve an organization’s compliance posture. Notably, they are not limited to technical IT fixes, but include broader administrative policies and practices, such as training and breach preparedness.

Indeed, an increasing number of states require businesses to implement “reasonable safeguards” to protect personal information. In New York, for example, the New York SHIELD Act requires businesses of all sizes to adopt administrative, physical, and technical safeguards to protect the personal information they maintain about New York residents, though the statute does not require that specific technical safeguards be maintained. The California Privacy Rights Act (CPRA) adds to the California Consumer Privacy Act (CCPA) an affirmative obligation to “implement reasonable security procedures and practices…to protect the personal information from unauthorized or illegal access, destruction, use, modification, or disclosure.” Given what IT experts have been saying about its effectiveness, multifactor authentication in particular has been identified as a meaningful, albeit not foolproof, control to help prevent unauthorized access to information systems within the scope of privacy and security regulation.

Of course, there are no silver bullets, but such safeguards may dramatically reduce the chances of a cyberattack, and that is music to an underwriter’s ears. There will be claims, just fewer of them, and perhaps less damaging.


I wish to thank Marc Schein for his tireless commitment to educating on these issues and for his valuable contributions to this article.