The SolarWinds hack highlights the critical need for organizations of all sizes to include cyber supply chain risk management as part of their information security program. It is also a reminder that privacy and security risks to an organization’s data can come from various vectors, including third-party vendors and service providers. By way of example, the Pennsylvania Department of Health recently announced a data security incident involving a third-party vendor engaged to provide COVID-19 contact tracing. The personal information of Pennsylvania residents was potentially compromised when the vendor’s employees used an unauthorized collaboration channel.

Protecting against these risks requires maintaining and implementing a third-party vendor management policy, a critical and often overlooked part of an organization’s information security program. Appropriate vendor management helps guard against threats to an organization’s data posed by authorized third parties with direct or indirect access to that data. Risks include data breaches, unauthorized use or disclosure, and corruption or loss of data. These risks may come from vendors who provide cloud storage, SaaS, payroll processing or HR services, services using connected devices, IT services, or even records disposal.

Robust vendor management policies and practices typically involve three components: conducting due diligence to ensure that any third-party vendor or service provider with whom the organization shares personal information, or to whom it discloses or provides access to such information, implements reasonable and appropriate safeguards to protect the privacy and security of that data; contractually obligating the vendor or service provider to implement such safeguards; and monitoring the vendor or service provider for compliance with those contractual provisions.

While vendor management is a best practice, it is also required by certain U.S. federal laws, including the Gramm-Leach-Bliley Act and HIPAA; state laws in Massachusetts, Illinois, and California; and state regulations such as the New York Department of Financial Services Cybersecurity Regulation (23 NYCRR 500). In the EU, the General Data Protection Regulation (GDPR) specifically requires a data controller to use only processors (e.g., third-party service providers) that provide sufficient written guarantees to implement appropriate technical and organizational measures ensuring the privacy and security of the controller’s personal data.

Aside from mandated vendor management practices, over twenty states, including Florida, Texas, Massachusetts, New York, and Illinois, have laws requiring businesses that collect and maintain personal information to implement reasonable safeguards to protect that data. These states have been joined by the recently enacted California Privacy Rights Act (CPRA) and Virginia Consumer Data Protection Act (CDPA). Although the majority of these statutes do not define “reasonable safeguards,” vendor management practices, like data retention and storage limitation practices, may well constitute such a safeguard.

The Federal Trade Commission (FTC) took such a position in a Consent Agreement resolving alleged violations of the Gramm-Leach-Bliley Act (GLBA) Safeguards Rule. In its complaint, the FTC alleged several violations including a failure to take reasonable steps to select service providers capable of maintaining appropriate safeguards for personal information provided by the company and a failure to require service providers by contract to implement appropriate safeguards for such personal information. The Consent Agreement required the company to establish, implement, and maintain a comprehensive data security program that protects the security of certain covered information (i.e., reasonable safeguards). This requirement specifically includes selecting and retaining vendors capable of safeguarding company personal information they access through or receive from the company, and contractually requiring vendors to implement and maintain safeguards for such information.

Over recent months, companies have faced heightened risks to their information security from threat actors, increased remote work arrangements, and outsourced activities involving sensitive data. These threats, combined with a proliferation of proposed and enacted data protection laws, underscore the importance of implementing, maintaining, and monitoring a robust vendor management program.

The California Privacy Rights Act (CPRA) amended the California Consumer Privacy Act (CCPA) and has an operative date of January 1, 2023. The CPRA introduces new compliance obligations, including a requirement that businesses conduct risk assessments. While many U.S. companies currently conduct risk assessments for compliance with state “reasonable safeguards” statutes (e.g., Florida, Texas, Illinois, Massachusetts, New York) or the HIPAA Security Rule, the CPRA risk assessment has a different focus. This risk assessment requirement is similar to the EU General Data Protection Regulation’s (GDPR) data protection impact assessment (DPIA).

The goal of conducting a CPRA risk assessment is to restrict or prohibit the processing of personal information where the risks to a consumer’s privacy outweigh any benefits to the consumer, business, stakeholders, and public. Notably, the CPRA does not limit risk assessments to activities involving the processing of sensitive data. In addition to conducting the actual risk assessment, this process will require a preliminary determination of which data processing activities may present a significant risk to privacy rights. The business must document these risk assessments for submission to the California Privacy Protection Agency on a regular basis.

Under the CPRA, the documented risk assessment shall:

  • include whether the processing involves consumers’ sensitive personal information (e.g., social security, driver’s license, state identification card, or passport number; account log-in, financial account, debit card, or credit card number in combination with security or access code, password, or credentials for account; precise geolocation; racial or ethnic origin, religious or philosophical beliefs, or union membership; contents of mail, email, and text messages unless the business is the intended recipient of the communication; genetic data; biometric information processed for the purpose of uniquely identifying a consumer; information related to health, sex life or orientation); and
  • identify and weigh the benefits to the business, consumer, other stakeholders, and the public from the processing against the potential risks to the rights of the consumer whose data is being processed.

The CPRA directs the California Attorney General and California Privacy Protection Agency to issue implementing regulations, including regulations related to risk assessments. These regulations must be adopted by July 1, 2022 and will likely provide further guidance on the scope of and process for conducting and documenting risk assessments.

Complying with the CPRA will require expanded data mapping and advance planning, some of which may occur prior to issuance of the implementing regulations. During this time, businesses may find the GDPR instructive, particularly since the CCPA and CPRA borrow liberally from the regulation.

Under the GDPR and related guidelines, a DPIA is required or recommended where data processing is likely to result in a high risk to the privacy rights of individuals. This includes activities that:

  • use automated processing, including profiling, to evaluate an individual’s personal aspects and on which decisions are based that produce significant effects
  • include large scale processing of sensitive data
  • process data on a large scale
  • match or combine datasets
  • process data of vulnerable individuals (e.g., children)
  • innovate or use new technologies

The DPIA must document and include:

  • a description of the processing operations
  • the purposes of the processing
  • the legitimate interest pursued by the business, where applicable
  • an assessment of the necessity and proportionality of the processing activity in relation to the purposes
  • an assessment of the risks to the individual’s privacy rights
  • measures designed to address the risks, including safeguards, security measures and mechanisms to ensure the protection of personal data

The CCPA and CPRA currently exclude employee personal information from certain provisions (e.g., the right to opt out, right to delete). This carve-out exempts employee personal information from the risk assessment requirement outlined above; however, the carve-out is due to expire on January 1, 2023. As businesses begin developing their risk assessment programs, they will want to monitor whether this exclusion for employee information will be extended and/or amended and how it might impact the risk assessment process.

As noted above, the operative date of the CPRA is January 1, 2023. Implementing regulations must be adopted by July 1, 2022 and civil and administrative enforcement activity can commence on July 1, 2023.

For additional information on the CPRA, please reach out to a member of our Privacy, Data and Cybersecurity practice group or check out our CPRA blog series.

In a recent post, we highlighted the need for a privacy and cybersecurity training program, one not solely focused on spotting phishing attempts (although that is quite important as well). A primary reason, quite simply, is that employees continue to be a leading cause of data breaches. This fact was reaffirmed for the Wyoming Department of Health (WDOH) when an employee mistake resulted in the disclosure of the personal information of nearly 165,000 Wyomingites. And the risk is only amplified in the current remote work environment.

The WDOH announced on April 27, 2021, that it had inadvertently exposed 53 files containing COVID-19 and Influenza test data and 1 file containing breath alcohol test results. Some of the files had been exposed as early as November 5, 2020, but WDOH did not discover the incident until March 10, 2021. According to WDOH, the files included name or patient ID, address, date of birth, test result(s), and date(s) of service, but did not contain social security numbers, banking, financial, or health insurance information.

The breach resulted from an “inadvertent exposure” of the files by a WDOH workforce member who mistakenly and impermissibly uploaded them to private and public GitHub.com repositories, resulting in disclosure to unauthorized individuals. Notably, WDOH intended that GitHub.com, an internet-based software development platform, be used by its employees only for software code storage and maintenance.

It is not clear why the WDOH employee uploaded 54 files containing patient test result data, including COVID-19 test results, to a service intended for storage of coding data. And, we do not know whether the employee in this case received training on the purpose and use of GitHub.com. However, according to WDOH’s announcement, the files were promptly removed from GitHub.com, the employee was sanctioned, and WDOH retrained its workforce on data privacy and security best practices.

Certainly, mistakes in processing personal information are going to happen, and no amount of training will prevent all data incidents and breaches. There is no silver bullet. An important question for an organization to ask, however, is whether it is taking reasonable steps to minimize the risk to data, including from inadvertent handling errors and misuse of company systems, among other things.

Training can be one of a number of tools organizations use to create a culture of privacy and security. Increased awareness can help to minimize, even if not eliminate, inadvertent errors. The white paper we provided in our earlier post outlines several considerations for developing a robust program designed to continually remind employees of the vigilance needed to protect personal information from unauthorized access, acquisition, modification, and disclosure. It is and will continue to be an ongoing challenge, particularly in the current environment with workplaces shifting as we emerge from the harshest effects of the pandemic.

Will Florida be the next state to enact a comprehensive consumer privacy law? It sure is starting to look like a viable possibility. With the California Consumer Privacy Act (“CCPA”) in full effect, and the recent enactment of Virginia’s Consumer Data Protection Act (“CDPA”), there has been a flurry of state privacy legislative proposals since the start of 2021, with Florida leading the way. Backed by Governor Ron DeSantis, Florida House Bill 969 (HB 969) would create new obligations for covered businesses and greatly expand consumers’ rights concerning their personal information, such as a right to notice about a business’s data collection and selling practices.

Florida’s HB 969 was originally introduced in February (a full overview of the initial bill is available here), and has continued to move swiftly through the legislative process. On April 21, a slightly revised version of the bill passed the Florida House of Representatives by a 118 – 1 vote, expanding the scope of the private cause of action, changing the effective date, and modifying the scope of companies subject to the law.

Here are the key changes made to HB 969 since it was originally introduced:

First, and similar to the California Consumer Privacy Act (CCPA), HB 969 would establish a private cause of action for consumers affected by a data breach involving certain personal information when reasonable safeguards were not in place to protect that information. Going beyond the CCPA, however, a private cause of action would also be available to consumers for a company’s failure to comply with deletion, opt-out, and correction requests. Conversely, Virginia’s CDPA lacks a private cause of action entirely, and that state’s attorney general has exclusive enforcement authority.

Second, if passed, HB 969 would go into effect on July 1, 2022 – instead of the originally proposed January 1, 2022. And finally, as initially introduced, HB 969 stated that the law would apply to for-profit businesses that conduct business in Florida, collect personal information about consumers, and satisfy at least one of the following threshold requirements:

  1. The business has global annual gross revenues over $25 million (adjusted to reflect any increase in the consumer price index); or
  2. The business annually buys, receives for the business’s commercial purposes, sells, or shares for commercial purposes the personal information of at least 50,000 consumers, households, or devices; or
  3. The business derives at least half of its global annual revenues from selling or sharing personal information about consumers.

Instead, HB 969 now stipulates that the law would apply only to for-profit businesses that satisfy at least two of the above threshold requirements. In addition, the revised bill increased the annual gross revenues threshold from over $25 million to over $50 million.

Florida seems to be leading the way as the next state poised to enact a consumer privacy law, but it is not alone. The International Association of Privacy Professionals (IAPP) has observed, “State-level momentum for comprehensive privacy bills is at an all-time high.” The IAPP maintains a map of state consumer privacy legislative activity, with in-depth analysis comparing key provisions. There are currently at least 14 states with consumer privacy bills undergoing the legislative process, and several other states where bills were introduced but died in committee or were postponed. One key state to keep an eye on is Washington. For three consecutive years, the Washington state legislature has introduced versions of the Washington Privacy Act (WPA). In 2019, the bill failed in the House. In 2020, the House passed an amended version of the bill, but the two chambers failed to reach a compromise regarding enforcement provisions. Currently in cross-committee, the WPA would impose GDPR-like requirements on businesses that collect personal information related to Washington residents. In addition to requirements for notice and consumer rights such as access, deletion, and rectification, the WPA would impose restrictions on the use of automated profiling and facial recognition.

States across the country are contemplating ways to enhance their data privacy and security protections. Organizations, regardless of their location, should be assessing and reviewing their data collection activities, building robust data protection programs, and investing in written information security programs.

Increased remote work due to the COVID-19 pandemic has only exacerbated privacy and cybersecurity concerns, and likely has not changed the finding in Experian’s 2015 Second Annual Data Breach Industry Forecast:

Employees and negligence are the leading cause of security incidents but remain the least reported issue.

A more recent state of the industry report by Shred-It, an information security company, found that 47 percent of business leaders said employee error, such as accidental loss of a device or document, had caused a data breach at their organization. Moreover, CybSafe, a behavioral security platform, analyzed data from the UK Information Commissioner’s Office (ICO) in 2020 and concluded that human error was the cause of approximately 90 percent of data breaches in the prior year, up from 61 percent and 87 percent in the two preceding years. The annual half-hour phishing training is important, but it may not be sufficient.

No business wants to send letters to individuals – employees or customers – informing them about a data breach. Businesses also do not want to have their proprietary and confidential business information, or that of their clients or customers, compromised. Unfortunately, no “silver bullet” exists to prevent important data from being accessed, used, disclosed, or otherwise handled inappropriately – not even encryption. Companies must simply manage this risk through reasonable and appropriate safeguards. Because employees are a significant source of risk, steps must be taken to manage that risk, and one of those steps is training.

Check out our white paper on employee privacy and cybersecurity training programs.

It is a mistake to believe that only businesses in certain industries, like healthcare, financial services, retail, education, and other heavily regulated sectors, have obligations to train employees about data security. Recent Department of Labor guidance concerning cybersecurity best practices for retirement plans includes a recommendation to “[c]onduct periodic cybersecurity awareness training.” Indeed, a growing body of law, coupled with the vast amounts of data most businesses maintain, should prompt all businesses to assess their data privacy and security risks and implement appropriate awareness and training programs.

Data privacy and security training can take many forms. Here are some questions to ask when setting up your own program, which are briefly discussed in the whitepaper at the link above:

  • Who should design and implement the program?
  • Who should be trained?
  • Who should conduct the training?
  • What should the training cover?
  • When and how often?
  • How should training be delivered?
  • Should training be documented?

No system is perfect, however, and even a good training program will not prevent all data incidents. But the question you will have to answer for the business is not why the company lacked a system that prevented every inappropriate use or disclosure. Instead, the question will be whether the business had safeguards that were compliant and reasonable under the circumstances.

In a recent employee termination case, the Third Circuit Court of Appeals upheld the dismissal of race discrimination claims by a bank employee who was terminated over a social media post.

Plaintiff, a Caucasian woman, was employed as a project manager in her employer’s wealth management department.  In June 2018, a public news article on a social media site reported on the arrest of a local politician who allegedly drove a car through a crowd of demonstrators protesting the shooting death of Antwon Rose, Jr., a young African-American male, by police officers.  Plaintiff publicly commented on the article under her own social media account, “[t]otal BS.  He should have taken a bus to plow thru.”  Plaintiff’s social media account publicly stated that she was an employee of the bank.

The bank was not monitoring plaintiff’s social media account and was not aware of the post until offended users of the social media platform flooded the bank, and even its executive officers, with complaints.  Plaintiff was terminated after an investigation that found her post violated the bank’s conduct and social media policies.

The District Court agreed that plaintiff violated the bank’s policies and granted summary judgment in its favor.  In doing so, it rejected plaintiff’s attempts to point to African-American employees who were not terminated for their social media posts.  The Court specifically found those individuals were not similarly situated because, among other things, their posts did not advocate violence, were not made in the comments section of a public news story, and did not result in a “public outcry.” The Third Circuit affirmed the dismissal and agreed the alleged comparators were not similarly situated.  The Court specifically agreed plaintiff’s post was far more egregious than those of the alleged comparators and was far more likely to harm the bank’s reputation.

Over the past few years, states around the country have enacted laws limiting an employer’s ability to access the personal social media accounts of job applicants and employees. However, these laws generally do not prohibit employers from conducting certain investigations, such as to ensure compliance with state or federal laws, regulatory requirements or prohibitions against work-related employee misconduct based on the receipt of specific information about activity on an employee or applicant’s personal online account. Employers also may monitor, review, access or block electronic data stored on an electronic communications device paid for, in whole or in part, by the employer, or traveling through or stored on the employer’s network.

When companies are faced with adverse social media activity or campaigns, whether by employees, customers, bloggers, or others, they frequently are unprepared to take the appropriate steps to investigate, or to weigh the legal, business, reputational, and related risks in deciding what actions, if any, to take. For this reason, it is important to have a clear workplace social media policy in place to help reduce the likelihood of an incident or at least limit its impact. But while courts and the National Labor Relations Board (NLRB) seem to be employer friendly of late in approving such policies, it is important to tread carefully, aiming to develop a policy that achieves the company’s legitimate business interests without compromising its employees’ rights to privacy under statutory and common law and rights related to freedom of speech. Employers should continue to exercise care when addressing and responding to their employees’ social media usage.  Jackson Lewis attorneys are available to assist with those and other issues and formulate preventative strategies that mitigate risk.

Today, the U.S. Department of Labor’s Employee Benefits Security Administration (EBSA) issued much anticipated cybersecurity guidance for employee retirement plans. This comes more than four and a half years after the ERISA Advisory Council, a 15-member body appointed by the Secretary of Labor to provide guidance on employee benefit plans, shared with the federal Department of Labor some considerations concerning cybersecurity. The essence of today’s guidance:

Responsible plan fiduciaries have an obligation to ensure proper mitigation of cybersecurity risks.

What that obligation means at this point is at least what EBSA set out in the materials on its website, although its “Online Security Tips” are directed more to plan participants than to plan fiduciaries.

Acknowledging ERISA-covered plans hold “millions of dollars or more in assets and maintain personal data on participants,” EBSA’s guidance lists a range of best practices for use by plan recordkeepers and service providers responsible for plan-related IT systems and data, as well as plan fiduciaries having the duty to make prudent decisions when evaluating and selecting plan service providers. Some of the EBSA’s best practices include:

  • Maintain a formal, well documented cybersecurity program.
  • Conduct prudent annual risk assessments.
  • Implement a reliable annual third-party audit of security controls.
  • Follow strong access control procedures.
  • Ensure that any assets or data stored in a cloud or managed by a third-party service provider are subject to appropriate security reviews and independent security assessments.
  • Conduct periodic cybersecurity awareness training.
  • Have an effective business resiliency program addressing business continuity, disaster recovery, and incident response.
  • Encrypt sensitive data, stored and in transit.

The EBSA fleshes out each of these best practices to give recordkeepers, service providers, and plan fiduciaries more guidance when developing their own policies and procedures. It is worth noting these best practices are not dissimilar to other, well-known frameworks designed to protect personal data. So, organizations that have engaged in efforts to comply with, for example, the HIPAA privacy and security rules for group health plans, the Massachusetts data security regulations, or the NY SHIELD Act will have a head start taking similar steps concerning their retirement plans and/or their services to plans.

Selecting ERISA plan service providers has long been an important fiduciary function for plan fiduciaries. In its guidance, EBSA offers key cybersecurity issues to account for when selecting service providers, including the following:

  • Ask about the service provider’s information security standards, practices and policies, and audit results, and compare them to the industry standards adopted by other financial institutions. Plan sponsors may assume that a service provider referred from a trusted source with compelling marketing materials would have put in place appropriate cybersecurity safeguards. As the saying goes, “Trust, but verify.” This also applies to all third-party plan providers, even large, well-known organizations.
  • Ask the service provider how it validates its practices, and what levels of security standards it has met and implemented. Look for contract provisions that give you the right to review audit results demonstrating compliance with the standard.
  • Ask whether the service provider has experienced past security breaches, what happened, and how the service provider responded. As these incidents are often reported, consider reviewing news accounts of the service provider’s response to the incident.
  • Investigate whether the service provider might have cyber insurance that would cover losses caused by cybersecurity and identity theft breaches, including misconduct by the service provider’s own employees or contractors, or a third party hijacking a plan participant’s account.
  • Consider the willingness of the service provider to include contract terms requiring ongoing compliance with cybersecurity, clear rules concerning use and disclosure of personal information, responsibility for security breaches, and other key terms addressing exposure to the plan, plan sponsor, and participants.

It is important to note that no set of safeguards will prevent all data breaches, and no amount of due diligence will result in the selection of a flawless service provider. In many cases, a data breach experienced by a plan service provider may not, by itself, warrant moving away from that provider.

Third-party plan service providers and plan fiduciaries should begin taking reasonable and prudent steps to implement safeguards that will adequately protect plan data. EBSA’s guidance should help the responsible parties get there, along with the plan fiduciaries and plan sponsors’ trusted counsel and other advisors.

The Biden administration reportedly has called for all people age 18 and older to be eligible for the COVID-19 vaccine by April 19, 2021 – two weeks earlier than its prior goal of May 1, and less than a week away. Most states have already expanded eligibility. Without the barriers created by state-by-state priority rules, the rate of vaccinations is likely to increase, hopefully helping to contain the fourth wave of COVID-19 cases observed in recent weeks.

A BenefitsPro article cites a 2017 survey from the Society for Human Resource Management (SHRM) that found almost 60 percent of employers offer on-site flu vaccinations. Naturally, with expanding availability of COVID-19 vaccination doses and widespread eligibility, organizations are asking whether setting up an on-site COVID-19 vaccination program is more involved than one offering flu shots. The short answer is yes.

The country continues to operate under a national emergency due to a pandemic, a circumstance not present during a typical flu season. Accordingly, concerns about safety and minimizing spread are significantly amplified. Individuals tend to be familiar with flu vaccines; not so with the current COVID-19 vaccines. Concerns over the emergency use authorization status of the COVID-19 vaccines, privacy, individual rights, school openings and childcare, effects on continued employment, liability, and so on are apparently not as prominent when getting an annual flu shot.

Taking those and other concerns into account, organizations considering setting up an on-site COVID-19 vaccination program have several issues to consider. Some of my colleagues and I assembled a nonexhaustive list of some of those issues (see our complete article here):

  • Getting Organized
  • Vaccine Administration and Reporting
  • Facility Suitability and Preparedness
  • Liability
  • Communications
  • Employment Issues
  • Privacy and Data Security

There is quite a bit to think about when setting up a COVID-19 vaccination program. While COVID-19 programs will likely differ from flu vaccination programs, prior experience with health fairs and flu vaccination offerings can be helpful reference points. Having a good team in place, careful planning, and the support and collaboration of a local health department (LHD) or third-party health care provider (TPHCP), among other things, will help lead to a successful program.

COVID-19 drove many formerly in-person interactions onto a variety of video conferencing platforms.  But as millions of vaccinations are administered each day, and case numbers decline, it’s now possible to imagine and plan for the time when conducting business over video will no longer be mandatory.

For many organizations, though, COVID-19 has led to an epiphany that will very likely outlast the pandemic: Many aspects of work can be conducted remotely, without any drop in productivity and with enormous advances in convenience and geographic reach.

An organization based in Chicago, for instance, no longer needs to limit its pool of job candidates to those willing to relocate to that city, and no longer needs to fly candidates in – at great expense – for in-person interviews.  Instead, the organization can expand the scope of its search to include candidates who live – and plan to remain – in distant locations like Austin, Denver, Miami, and Nashville, and can interview those candidates by video conference.

What’s more, video conferencing platforms allow an organization to record those interviews, thereby potentially reducing biases and errors in its interview processes by creating far more reliable records of what transpired during each interview.  The benefits don’t end there.  The organization can then use its archive of video interviews to evaluate which interview styles and questions were most effective in screening candidates and can use the videos to train its staff on best practices for conducting future interviews.

But there’s a catch: In addition to potential concerns that the recordings may create unhelpful or even harmful “evidence,” video recording job interviews may also expose organizations to significant data privacy and security risk – risk which can and must be managed through thoughtful policies and procedures.

Risks

  1. Candidates in other states or countries may bring their jurisdictions’ data privacy and security obligations with them. Many data privacy and security laws are tied to the location or residence of the data subject (e.g., the job candidate), not the location of the data controller (e.g., the organization conducting the search).  If your organization records interviews of candidates residing in California or the EU, for instance, it may be subject to obligations under the CCPA or GDPR, respectively.  Both of these laws generally require the provision of certain privacy notices and, in the case of the GDPR, grant data subjects an expansive set of rights related to the collection, use, disclosure, and retention of their data.  (Beginning in January 2023, when the CPRA, a new California law, takes effect, California candidates will have similarly expansive rights.)
  2. Interview recordings will likely contain far more personal information than the notes or memos generated during or after in-person interviews. Interview discussions can be wide-ranging, often touching on subjects that may qualify as personal information under applicable law – including information that would rarely make it into written records of that discussion.  For instance, even if not asked, the candidate might discuss her own or a family member’s medical condition, or she might directly or indirectly indicate her religious affiliation or sexual orientation.  And even when discussion focuses on more mundane topics – like educational and work histories – the information collected may trigger privacy obligations under expansive privacy regimes like the CCPA, CPRA, and GDPR.
  3. Complying with purpose limitations. The CCPA and GDPR require organizations to disclose to data subjects the purposes for which their personal information is used.  And, in the case of the GDPR, the organization may be required to assess whether its own purposes for using the personal information may be overridden by competing interests of the data subject.  The obvious, likely unobjectionable, purpose for recording a video interview is to better evaluate the candidate at issue.  But if the organization subsequently decides to use the recording for training or marketing, it could incur obligations to provide additional disclosures, obtain additional consent, and/or conduct additional analysis.
  4. Ensuring all parties consent. About a dozen US states require the consent of both parties to record a conversation.  An organization conducting interviews by video conference must therefore ensure that, prior to recording the interview, it obtains consent from both the candidate and the employees involved in conducting the interview.
  5. Ensuring video interviews are adequately secured. Data breaches have become an enormous source of liability for most organizations.  It is not unusual for breaches to stem from systems or databases that an organization overlooked when designing its data security program because they weren’t obvious repositories of sensitive information.  An archive of interview videos could easily fall into that category.

Mitigation Strategies

  1. Conduct scope analysis. Given the proliferation of data privacy and security laws – Virginia recently passed an expansive new privacy law, and Colorado, Florida, New York, and other states may soon follow suit – and the fact that many of these laws are tied to the location or residence of the data subject, determining which laws will govern your organization’s recording of video interviews is a critical first step.
  2. Ensure you provide requisite privacy notices. If applicable, based on your organization’s scope analysis, provide privacy notices to interviewees prior to their interview.  Where the CCPA applies, for instance, your organization will likely need to provide a “notice at collection” to candidates, disclosing to them the categories of personal information that your organization collects about job applicants and the purposes for which it uses that information.
  3. Prepare to respond to requests for access, deletion, and rectification. If the GDPR applies, candidates may be entitled to request that your organization grant them access to their interview recordings, that it delete those recordings, or that it permit candidates to correct inaccurate information in the recordings.  In California, the CPRA will impose similar requirements when it takes effect.
  4. Collect requisite consent. Your organization will, in most instances, be able to address applicable obligations to obtain consent to record video interviews by taking two relatively simple steps.  First, it should develop a policy placing all employees who conduct video interviews on notice that those interviews will be recorded and collect from each employee an acknowledgment of receipt of that notice.  Second, it should train applicable employees to advise candidates at the start of each interview that the interview will be recorded for specified purposes (e.g., to improve the quality of the organization’s interview processes).
  5. Develop policies and procedures to ensure proper use, disclosure, security, and retention. To comply with the GDPR, CCPA, and other data privacy and security laws, your organization should ensure that it has policies and procedures in place to regulate how interview recordings are used, who has access to them, to whom they’re disclosed, where they’re stored, and how long they’re kept.  For instance, your organization may need to develop policies to prevent the use of interview recordings for purposes not previously disclosed; to restrict access to the recordings to employees with a legitimate need; to limit disclosure of the recordings to trusted third parties with whom it has proper contractual protections in place; and to ensure the recordings are securely destroyed in accordance with the organization’s record retention policy.

With good reason, many organizations are intrigued by the prospect of recording video interviews – along with other video communications – for future use.  For organizations engaging in this practice, or planning to, however, it’s important to be mindful of the associated risks.  These risks will not, in most instances, be prohibitive, but they require careful consideration and the implementation of thoughtful mitigation strategies.

In mid-March, Utah Governor Spencer Cox signed into law the Cybersecurity Affirmative Defense Act (HB80) (“the Act”), an amendment to Utah’s data breach notification law, creating several affirmative defenses for persons (defined below) facing a cause of action arising out of a breach of system security, and establishing the requirements for asserting such a defense.

In short, the Act seeks to incentivize individuals, associations, corporations, and other entities (“persons”) to maintain reasonable safeguards to protect personal information by providing an affirmative defense in litigation flowing from a data breach. More specifically, a person that creates, maintains, and reasonably complies with a written cybersecurity program that is in place at the time of the breach will be able to take advantage of an affirmative defense to certain claims under the Act:

  • A claim alleging that the person failed to implement reasonable information security controls that resulted in the breach of system security.
  • A claim that the person failed to appropriately respond to a breach of system security.
  • A claim that the person failed to appropriately notify an individual whose personal information was compromised in a breach of security.

A written cybersecurity program must satisfy several requirements to warrant the Act’s protection. Among other things, the program must provide administrative, technical, and physical safeguards to protect personal information. These safeguards include:

  • being designed to:
    • protect the security, confidentiality, and integrity of personal information;
    • protect against any anticipated threat or hazard to the security, confidentiality, or integrity of personal information; and
    • protect against a breach of system security.
  • reasonably conforming to a recognized cybersecurity framework (see below); and
  • being of an appropriate scale and scope in light of several factors (e.g., the size and complexity of the business, the nature and scope of its activities, and the sensitivity of the information protected).

Reasonably conforming to a recognized cybersecurity framework generally means (i) being designed to protect the type of information involved in the breach of system security, and (ii) one of the following: (a) constituting a reasonable security program as described in the Act; (b) reasonably conforming to an enumerated security framework, such as NIST Special Publication 800-171 or the Center for Internet Security Critical Security Controls for Effective Cyber Defense; or (c) reasonably complying with the federal or state regulations applicable to the personal information obtained in the breach of system security (e.g., complying with HIPAA when “protected health information” is breached).

A person may not claim an affirmative defense, however, if:

  • The person had actual notice of a threat or hazard to the security, confidentiality, or integrity of personal information;
  • The person did not act in a reasonable amount of time to take known remedial efforts to protect the personal information against the threat or hazard; and
  • The threat or hazard resulted in the breach of system security.

Utah is the second state to establish an affirmative defense to claims arising from a data breach.  Back in 2018, Ohio enacted the Ohio Data Protection Act (SB 220), similarly providing a safe harbor for businesses implementing and maintaining “reasonable” cybersecurity controls.

The affirmative defense model established by Utah and Ohio is a win for companies and consumers alike, as it incentivizes heightened protection of personal data while providing a safe harbor from certain claims for companies facing data breach litigation.  It would not be surprising to see other states take a similar approach.  Most recently, the Connecticut General Assembly reviewed HB 6607, “An Act Incentivizing the Adoption of Cybersecurity Standards for Businesses,” which provides for a safe harbor similar to those in Utah and Ohio.  Creating, maintaining, and complying with a robust data protection program is a critical risk management and legal compliance step, and one that might provide protection from litigation following a data breach.