When California voters approved Proposition 24, the California Privacy Rights Act (CPRA), on November 3, 2020, they substantially amended the California Consumer Privacy Act (CCPA), which had become effective only 10 months earlier. We previously outlined the basic rules for determining when the CCPA applies; here, we summarize the changes made by the CPRA.

Some of the requirements for the CCPA to apply remain the same, namely that a “business” (i) do business in the State of California, (ii) collect personal information (or have such information collected on its behalf), and (iii) alone or jointly with others determine the purposes or means of processing that data. However, a “business” under the CCPA also must satisfy at least one of three additional requirements, which the CPRA modified as follows:

  • Annual gross revenue. CCPA: the business has annual gross revenue in excess of $25 million. CCPA as amended by CPRA: the requirement is satisfied if, as of January 1 of a calendar year, the business had annual gross revenues in excess of $25 million in the preceding calendar year.
  • Volume of personal information. CCPA: the business “alone or in combination, annually buys, receives for the business’s commercial purposes, sells, or shares for commercial purposes, alone or in combination, the personal information of 50,000 or more consumers, households, or devices.” CCPA as amended by CPRA: the business “alone or in combination, annually buys, sells, or shares the personal information of 100,000 or more consumers or households.”
  • Revenue from personal information. CCPA: the business derives 50 percent or more of its annual revenues from selling consumers’ personal information. CCPA as amended by CPRA: the business derives 50 percent or more of its annual revenues from selling or sharing consumers’ personal information.

In addition to businesses that meet the requirements referenced above, the CCPA also applies to any entity that controls or is controlled by such a business and shares common branding with that business. However, the CPRA clarified when these rules apply. First, it is not enough to share common branding; the business also must share consumers’ personal information with the entity that controls or is controlled by it. Second, under the CPRA, “common branding” does not mean simply a shared name, servicemark, or trademark; the shared branding must be such that it would cause the average consumer to understand that the entities are commonly owned.

The CPRA also adds a third category of entity that qualifies as a “business” for purposes of these rules:

A joint venture or partnership composed of businesses in which each business has at least a 40 percent interest.

In this case, the joint venture or partnership and each business that composes it will separately be considered a single business. Notably, personal information in the possession of each business and disclosed to the joint venture or partnership may not be shared with the other business.
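For readers who want to see how the threshold tests above fit together, the following is a minimal sketch in Python using hypothetical figures and simplified field names of our own choosing. It illustrates only the three quantitative tests; it does not capture the other prerequisites (doing business in California, collecting personal information, determining the purposes or means of processing), and it is not legal advice.

```python
# Hypothetical sketch of the three CPRA-era threshold tests discussed above.
# Figures and field names are illustrative only; this is not legal advice.

from dataclasses import dataclass


@dataclass
class CompanyProfile:
    prior_year_gross_revenue: float   # gross revenue in the preceding calendar year
    consumers_or_households: int      # consumers/households whose PI is annually bought, sold, or shared
    share_of_revenue_from_selling_or_sharing_pi: float  # fraction of annual revenue, 0.0 to 1.0


def meets_a_cpra_threshold(c: CompanyProfile) -> bool:
    """True if the company satisfies at least one of the three CPRA tests."""
    revenue_test = c.prior_year_gross_revenue > 25_000_000
    volume_test = c.consumers_or_households >= 100_000
    selling_test = c.share_of_revenue_from_selling_or_sharing_pi >= 0.50
    return revenue_test or volume_test or selling_test


# Example: $30M in prior-year revenue satisfies the revenue test on its own.
print(meets_a_cpra_threshold(CompanyProfile(30_000_000, 40_000, 0.10)))  # True
```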

Persons to whom a business makes personal information available or who process or receive personal information from or on behalf of a business. The CPRA made substantial changes to the rules that apply to persons that work with covered businesses to receive and process personal information. For instance, the CPRA added a new category, “contractor,” which is a person to whom the business makes available a consumer’s personal information for a business purpose. A discussion of these rules is beyond the scope of this post, but businesses will need to better understand the relationships they have with unrelated “persons” that receive and/or process personal information from or on behalf of the business. This includes making sure such activity is pursuant to a written contract that satisfies certain requirements.


Businesses (and their service providers and contractors) should be reviewing the changes made by the CPRA to determine whether the CCPA, as modified, applies to them. Each of these entities could face administrative fines of not more than $2,500 for each violation, and not more than $7,500 for each intentional violation or for violations involving the personal information of consumers whom the business, service provider, contractor, or other person has actual knowledge are under 16 years of age.


For the past several years, thousands of businesses have been hit with phishing scams during tax season. Through these social engineering scams, hackers obtain employee Forms W-2 for filing fraudulent tax returns seeking large refunds. These phishing emails are typically sent as employers begin the process of issuing W-2s to employees. Often employers do not know the scam has occurred until it is too late. The consequences of a successful W-2 phishing scam can extend well beyond leaked data, and may include potential employee class action litigation.

With tax season quickly approaching, it’s worth revisiting W-2 phishing email scams and describing steps an employer can take to help avoid them. The cyber-scam consists of an e-mail sent to an HR or Accounting department employee, purportedly from an executive or “higher-up” within the organization. Both the TO and FROM e-mail addresses appear to be legitimate internal addresses, as do the “sender” and recipient names. The fake e-mail asks the employee to forward the company’s W-2 forms, or related tax data, to the “sender.” This request aligns with the job responsibilities of both the employee and the supposed internal “sender.” Despite its appearance, the e-mail is a fake. The scammer is “spoofing” the company executive’s identity. In other words, the cyber-criminal is assuming the executive’s identity and e-mail address for the purpose of sending what appears to be a legitimate request for sensitive company information. The unsuspecting employee relies on the accuracy of the sender e-mail address, coupled with the sender’s job title and role, and forwards the confidential W-2 information. The information goes to a hidden e-mail address controlled by the cyber-criminal.
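To illustrate why the visible “From” line alone cannot be trusted, here is a minimal Python sketch using only the standard library and a fabricated message. It flags a mismatch between the apparent sender and the Reply-To address, one common (but not definitive) indicator of a spoofed request; real-world defenses rely on gateway-level SPF, DKIM, and DMARC enforcement and, above all, on the verification procedures discussed below.

```python
# Minimal illustration of one spoofing red flag: the visible "From" address and the
# Reply-To address point to different domains. The message below is fabricated.
from email import message_from_string
from email.utils import parseaddr

raw_message = """\
From: Jane Smith <ceo@example-company.com>
Reply-To: taxdocs@attacker-mailbox.example
To: payroll@example-company.com
Subject: W-2 forms needed today

Please send PDF copies of all employee W-2s as soon as possible. Thanks.
"""

msg = message_from_string(raw_message)
_, from_addr = parseaddr(msg.get("From", ""))
_, reply_addr = parseaddr(msg.get("Reply-To", from_addr))

from_domain = from_addr.rsplit("@", 1)[-1].lower()
reply_domain = reply_addr.rsplit("@", 1)[-1].lower()

if from_domain != reply_domain:
    # Legitimate mail can also mismatch, so treat this as a prompt to verify, not proof of fraud.
    print(f"Warning: replies go to {reply_addr}, not the apparent sender {from_addr}")
```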

If successful, the cyber-criminal obtains a trove of sensitive employee data that can include names, addresses, salary information, and Social Security numbers, as well as employer information needed for tax filings. The information is used to file fake individual tax returns (Form 1040) that generate fraudulent tax refunds, or it is sold on the dark web to identity thieves.

This cyber-scam is a form of “spear phishing” known as a business email compromise (BEC) attack, or CEO spoofing. Spear phishing attacks target a specific victim by using personal or organizational information to earn the victim’s trust. The cyber-criminal uses information such as personal and work e-mail addresses, job titles and responsibilities, names of friends and colleagues, personal interests, etc. to lure the victim into providing sensitive or confidential information. Quite often, the scammer culls this information from social media, LinkedIn, and corporate websites. The method is both convincing and highly successful.

While an organization can use firewalls, web filters, malware scans or other security software to hinder spear phishing, experts agree the best defense is employee awareness. This includes ongoing security awareness training for all levels of employees, simulated phishing exercises, internal procedures for verifying transfers of sensitive information, and reduced posting of personal information on-line.

In the event your business falls victim to a W-2 phishing scam, it will need to respond quickly. This may require (i) investigating the nature and scope of the attack, (ii) ensuring the attackers are no longer in the business’s systems, (iii) determining whether the business must notify individuals and state agencies of the data loss under applicable state law, and whether to offer identity theft protection and credit monitoring services, (iv) notifying the IRS of a W-2 data loss at dataloss@irs.gov, (v) reporting the phishing email to the IRS at phishing@irs.gov and to the FBI’s Internet Crime Complaint Center, as well as to state taxing authorities, and (vi) helping employees with any questions about rectifying their tax returns.

A W-2 e-mail phishing scam can have a devastating impact on a business and its employees. This year presents increased challenges for employers trying to guard against these scams. Due primarily to vulnerabilities created by COVID-19, social engineering attacks designed to compromise employee accounts or credentials have proliferated. The FBI cautions that cyber criminals are trying to obtain employees’ credentials regardless of their position within the company. With tax season upon us, expect to see more creative attempts to bait your personnel.

Digital health passports are a key tech initiative as COVID-19 vaccinations begin rolling out. One example is being developed by a group of large tech companies along with the Mayo Clinic as part of the Vaccination Credential Initiative. The Initiative’s digital vaccination record will likely be a smartphone app. The Initiative is leveraging the CommonPass app, which is already being used by airlines to allow passengers to show a negative COVID-19 test result, a requirement to board certain flights.

A goal of digital health passports is to establish universal standards to verify whether a person has had a vaccination. Such digital health passports will become important as governments and major airlines require proof of negative COVID-19 testing or, eventually, of vaccination. For example, effective January 26, 2021, all air passengers arriving in the U.S. from a foreign country must provide proof of a negative test result, or documentation that they have recovered from COVID-19, prior to boarding the flight.

A key aspect in the development of digital health passports is ensuring data security. The system is designed as a digital wallet, allowing individuals to have control over who they share their information with. However, the data still moves between multiple systems and users must maintain proper data safeguards on their device to ensure the data is protected.

See our blog post about other COVID related technologies and associated legal issues here. Reach out to any member of the Privacy, Data, and Cybersecurity Group, or your Jackson Lewis contact, if you have any questions or need help in this area.

Enacted in 2008, the Illinois Biometric Information Privacy Act, 740 ILCS 14 et seq. (the “BIPA”), went largely unnoticed until a few years ago when a handful of cases sparked a flood of class action litigation over the collection, use, storage, and disclosure of biometric information. Seeing thousands of class action lawsuits, organizations have reevaluated and redoubled their compliance efforts. On January 28, 2021, a complaint was filed in Cook County, IL, Melvin v. Sequencing, LLC, alleging violations of the Illinois Genetic Information Privacy Act, 410 ILCS 513/1 – the “GIPA”…try not to get confused… which was originally effective in 1998.

Will the GIPA follow the BIPA?

The GIPA creates a private right of action using the same language as the BIPA:

Any person aggrieved by a violation of this Act shall have a right of action in a State circuit court or as a supplemental claim in a federal district court against an offending party.

However, while the BIPA provides for liquidated damages of $1,000 for each negligent violation and $5,000 for each intentional or reckless violation (or actual damages, if greater), the liquidated damages provisions under the GIPA are significantly higher: $2,500 and $15,000, respectively. If the holding of the Illinois Supreme Court in Rosenbach v. Six Flags Entertainment Corp., No. 123186 (Ill. Jan. 25, 2019) with regard to the BIPA is applied to the GIPA, plaintiffs could potentially maintain a cause of action and seek liquidated damages resulting from alleged violations of the GIPA without any showing of actual injury beyond a violation of their rights under the Act.

Of note, in Sekura v. Krishna Schaumburg Tan, Inc., 2018 IL App (1st) 180175, the Illinois Appellate Court for the First Judicial District noted, in a pre-Rosenbach BIPA case, that the GIPA “provide[s] for a substantially identical, ‘any person aggrieved’ right of recovery” as the BIPA. The First District noted that the GIPA was considered and amended during the same legislative session in which the BIPA was passed, suggesting that the legislature intended a similar framework to apply to both statutes.

So, what are some of the requirements of the GIPA?

The GIPA is largely based on the federal Genetic Information Nondiscrimination Act (the “GINA”) and incorporates several terms and concepts from the Privacy Rule under the Health Insurance Portability and Accountability Act (“HIPAA”). This includes the definition of the term “genetic information,” which is defined under the HIPAA regulations at 45 CFR 160.103 and includes the manifestation of a disease in a family member, which includes one’s spouse. The GIPA also includes requirements applicable to genetic testing companies, health care providers, business associates, insurers, and employers.

While not an exhaustive list of requirements, in general, under GIPA:

  • Genetic testing and information derived from genetic testing is confidential and privileged and may be released only to the individual tested and to persons specifically authorized, in writing in accordance with Section 30 of GIPA, by that individual to receive the information.
  • An insurer may not seek information derived from genetic testing for use in connection with a policy of accident and health insurance.
  • An insurer shall not use or disclose protected health information that is genetic information for underwriting purposes. Examples of “underwriting purposes” include: (i) determining eligibility (including enrollment and continued eligibility) for benefits under the plan, coverage, or policy (including changes in deductibles or other cost-sharing mechanisms in return for activities such as completing a health risk assessment or participating in a wellness program), (ii) the computation of premium or contribution amounts under the plan, coverage, or policy (including discounts in return for activities, such as completing a health risk assessment or participating in a wellness program); and (iii) other activities related to the creation, renewal, or replacement of a contract of health insurance or health benefits.
  • Companies providing direct-to-consumer commercial genetic testing are prohibited from sharing any genetic test information or other personally identifiable information about a consumer with any health or life insurance company without written consent from the consumer.
  • Employers must treat genetic testing and genetic information consistent with the requirements of federal law, including but not limited to the GINA, the Americans with Disabilities Act, Title VII of the Civil Rights Act of 1964, the Family and Medical Leave Act of 1993, the Occupational Safety and Health Act of 1970, and certain other laws.
  • Employers may permit the disclosure of genetic testing information only in accordance with the GIPA.
  • Employers may not (i) solicit, request, require or purchase genetic testing or genetic information of a person or a family member of the person, or administer a genetic test to a person or a family member of the person as a condition of employment; (ii) affect the terms, conditions, or privileges of employment, or terminate the employment of any person because of genetic testing or genetic information with respect to the employee or family member; or (iii) retaliate against any person alleging a violation of this Act or participating in any manner in a proceeding under the GIPA.
  • Employers cannot use genetic information or genetic testing for workplace wellness programs benefiting employees unless (1) health or genetic services are offered by the employer, (2) the employee provides written authorization in accordance with the GIPA, (3) only the employee (or family member if the family member is receiving genetic services) and the licensed health care professional or licensed genetic counselor involved in providing such services receive individually identifiable information concerning the results of such services, and (4) any individually identifiable information is only available for purposes of such services and shall not be disclosed to the employer except in aggregate terms that do not disclose the identity of specific employees. Employers can not penalize employees who do not disclose their genetic information or choose not to participate in a program requiring disclosure of the employee’s genetic information.

Whether an organization is a health care provider, a genetic testing company, an employer, or another company subject to the GIPA, it should review its policies and practices concerning genetic tests and genetic information. In Melvin v. Sequencing, LLC, the plaintiff alleges his genetic information was disclosed without his authorization. Based on our preliminary research, we could find no other cases addressing violations of the GIPA, so this may be a sign of more to come. Note also that Illinois is not the only state with laws protecting genetic information.

In recent years, there has been an uptick in W-2 phishing scams, and their consequences for an employer extend well beyond leaked data, including potential employee class action litigation. Just last week, a federal court in Illinois rejected a motion for class certification in a data breach case alleging disclosure of employees’ sensitive tax information and additional personal information, in McGlenn v. Driveline Retail Merch.

A W-2 phishing scam is a simple cyberattack, but it can be highly successful. It consists of a phishing e-mail sent to an employee, generally in the Human Resources or Accounting department, and designed to appear to come from an executive within the organization. The e-mail requests that the recipient forward the company’s W-2 forms, or related data, to the sender. This request aligns with the job responsibilities of both parties to the email. Despite appearances, the e-mail is a fraud. The scammer is “spoofing” the executive’s identity. The recipient relies on the accuracy of the sender’s e-mail address, coupled with the sender’s job title and responsibilities, and forwards the confidential W-2 information.

In McGlenn v. Driveline Retail Merch., an unknown person sent a phishing email to a Driveline employee in the payroll department. The email falsely identified the sender as the company’s Chief Financial Officer (CFO), and requested the employee send a copy of W-2 information for all Driveline employees.  According to the allegations, the employee provided the unknown person with W-2 information for nearly 16,000 employees including names, addresses, Social Security Numbers, and other personal identifying information (PII).

The plaintiff filed a putative class action against her employer, on behalf of other employees of the company, asserting several torts and state consumer protection violations. The plaintiff claimed that as a result of the data breach the class suffered damages due to: unauthorized use and misuse of their PII; the loss of opportunity to control how their PII is used; the diminution in value of their PII; the compromise, publication, and theft of their PII; out-of-pocket costs associated with prevention, detection, recovery, and remediation of identity theft or fraud; lost opportunity costs and wages associated with efforts expended and loss of productivity in attempting to mitigate the consequences of the breach; the “imminent and certain” impending injury flowing from potential fraud and theft; continued risk to their PII; and more.

Standing in data breach class action litigation is a highly contested issue, as courts differ on whether a data breach victim must suffer actual financial harm to recover damages, or whether the mere threat of future harm is enough. Federal circuit courts over the past few years have struggled with this issue, in large part due to lack of clarity following the U.S. Supreme Court’s decision in Spokeo, Inc. v. Robins, which held that even if a statute has been violated, plaintiffs must demonstrate that an “injury-in-fact” has occurred that is both concrete and particularized, but which failed to clarify whether a “risk of future harm” qualifies as such an injury. For example, the 3rd, 6th, 7th, 9th, and D.C. Circuits have generally found standing, while the 1st, 2nd, 4th, and 8th Circuits have generally found no standing where a plaintiff only alleges a heightened “risk of future harm.”

Here, in McGlenn, the court denied class certification for several independent reasons; among them, the court emphasized doubts about whether the class suffered a compensable injury. Moreover, the court was unsure whether the employer (Driveline) even owed the potential class members a duty to protect their PII, as Illinois does not impose a common law duty on employers to safeguard employee PII.

While such a holding is considered a win for employers, it still is an indication of how far the consequences of a phishing scam can extend. Even a case dismissed at an early stage will result in significant time and legal fees for the employer, not to mention damaged employee relations. Also, the result might have been different in another state, such as California. Under the California Consumer Privacy Act, California residents have a private right of action when their personal information is involved in a data breach due to the business’s failure to maintain reasonable safeguards. If successful, plaintiffs could each recover between $100 and $750 per incident, or actual damages, whichever is greater.
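As a rough, hypothetical illustration of the scale of that exposure (the headcount below is invented for the example and assumes every affected employee is a California resident with a viable claim, which is rarely the case), a quick back-of-the-envelope calculation:

```python
# Hypothetical back-of-the-envelope CCPA statutory-damages exposure.
# Headcount is invented; actual recovery depends on residency, the facts,
# and whether "reasonable safeguards" were maintained. Not legal advice.
affected_employees = 16_000   # hypothetical number of affected California residents
low, high = 100, 750          # CCPA statutory damages per consumer per incident

print(f"Potential exposure: ${affected_employees * low:,} to ${affected_employees * high:,}")
# Potential exposure: $1,600,000 to $12,000,000
```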

An organization can use firewalls, web filters, malware scans or other security software to hinder phishing scams, however experts agree the best defense is employee awareness. This includes ongoing security awareness training for all levels of employees, simulated phishing exercises, internal procedures for verifying transfers of sensitive information, and reduced posting of personal information on-line.

For more information on W-2 phishing scams, check out some past blog posts:

Federal contractors know all too well that the list of annual requirements and obligations can seem overwhelming at times. One requirement that may get overlooked by some is annual training. A fairly new such requirement went into effect in 2017: it requires certain federal contractors to provide annual data privacy training.

According to the U.S. General Services Administration (“GSA”), for example, its agency-wide and role-based training offerings cover the GSA’s policies on protecting personally identifiable information (“PII”). The GSA requires all employees and contractors to complete privacy and security awareness training upon employment and each year thereafter. Importantly,

GSA account holders must complete this training in order to maintain access to the agency’s IT systems and resources such as email, Google Drive and other IT resources.

The current political landscape (President Biden has announced a heightened focus in this area, including plans for $10B of investment in government cyber and IT infrastructure), the COVID-19 pandemic, during which many federal contractors are receiving large amounts of sensitive information, and recent high-profile data security incidents involving the U.S. government, like SolarWinds, all support a business imperative to bolster the privacy and security awareness of your workforce. Therefore, we recommend following the steps below to ensure your teams are trained in this critical area.

  1. Identify if requirements apply, and who needs training

In general, annual privacy training is required under FAR 52.224-3 for any federal contractor employee who accesses, processes, or handles PII on behalf of a government agency. This includes contractor employees who have access to any system of government records, or who assist in designing, developing, maintaining, or operating a system of records. Prime contractors are required to flow down these privacy training requirements to their subcontractors.

PII is defined in this regulation as “information that can be used to distinguish or trace an individual’s identity, either alone or when combined with other information that is linked or linkable to a specific individual.”

Per the FAR, as noted above, contractor employees may not have access to PII unless they have had the required privacy training.

  2. What must training include

Per the FAR, training must address:

  • The contractor’s policies and procedures for processing and safeguarding of PII;
  • The provisions of the Privacy Act of 1974, including penalties for violations of the Act;
  • The authorized and official use of a system of records;
  • The restriction on the unauthorized use, handling, disclosure, or access of PII or a system of records; and
  • The procedures to follow in the event of a suspected or confirmed breach of a system of records or PII.
  3. Understanding the requirements

A one-size-fits-all training likely will not be sufficient as the FAR requirements are described as “role based” and should be appropriate for different levels of employees. There should also be measures in place to test the knowledge of users. Contractors must also maintain and be able to provide documentation regarding the completion of the privacy training upon the request of their Contracting Officers.

  4. Format of training

Contractors may provide their own training to employees, except in the limited cases where an agency requires that certain training be utilized. Contractors can develop the content internally or use a third-party vendor or firm to do the training. Jackson Lewis provides this type of training to many of our government contractor clients.

  5. Recommended next steps for Government Contractors
  • Determine if your employees have access to PII as part of a government contract.
  • Review privacy procedures and policies to confirm compliance with training requirements.
  • If you are not currently training your employees in compliance with FAR 52.224-3, implement a training program for employees handling PII.
  • Review subcontracts, as the privacy training requirements also apply to subcontractors.
  • Reach out to your local Jackson Lewis office with any questions.

In honor of Data Privacy Day, we provide the following “Top 10 for 2021.”  While the list is by no means exhaustive, it does provide some hot topics for organizations to consider in 2021.

  1. COVID-19 privacy and security considerations.

During 2020, COVID-19 presented organizations large and small with new and unique data privacy and security considerations. Most organizations, particularly in their capacity as employers, needed to adopt COVID-19 screening and testing measures resulting in the collection of medical and other personal information from employees and others. This will continue in 2021 with the addition of vaccination programs. So, for 2021, ongoing vigilance will be needed to maintain the confidential and secure collection, storage, disclosure, and transmission of medical and COVID-19 related data that may now include tracking data related to vaccinations or the side effects of vaccines.

Several laws apply to data that organizations may collect. In the case of employees, for example, the Americans with Disabilities Act (ADA) requires maintaining the confidentiality of employee medical information, and this may include COVID-19 related data. Several state laws also have safeguard requirements and other protections for such data that organizations should be aware of when they, or others on their behalf, process that information.

Many employees will continue to telework during 2021. A remote workforce creates increased risks and vulnerabilities for employers in the form of sophisticated phishing email attacks or threat actors gaining unauthorized access through unsecured remote access tools. It also presents privacy challenges for organizations trying to balance business needs and productivity with expectations of privacy. These risks and vulnerabilities can be addressed and remediated through periodic risk assessments, robust remote work and bring your own device policies, and routine monitoring.

As organizations work to create safe environments for the return of workers, customers, students, patients and visitors, they may rely on various technologies such as wearables, apps, devices, kiosks, and AI designed to support these efforts. These technologies must be reviewed for potential privacy and security issues and implemented in a manner that minimizes legal risk.

Some reminders and best practices when collecting and processing information referred to above and rolling out these technologies include:

  • Complying with applicable data protection laws when data is collected, shared, secured and stored including the ADA, Genetic Information Nondiscrimination Act, CCPA, GDPR and various state laws. This includes providing required notice at collection under the California Consumer Privacy Act (CCPA), or required notice and a documented lawful basis for processing under the GDPR, if applicable.
  • Complying with contractual agreements regarding data collection; and
  • Contractually ensuring vendors who have access to or collect data on behalf of the organization implement appropriate measures to safeguard the privacy and security of that data.
  2. The California Consumer Privacy Act (CCPA) and the California Privacy Rights Act (CPRA)

On January 1, 2020, the CCPA ushered in a range of new rights for consumers, including:

  • The right to request deletion of personal information;
  • The right to request that a business disclose the categories of personal information collected and the categories of third parties to which the information was sold or disclosed;
  • The right to opt out of the sale of personal information; and
  • The California consumer’s right to bring a private right of action against a business that experiences a data breach affecting their personal information as a result of the business’s failure to implement “reasonable safeguards.”

The CCPA carves out (albeit not entirely) employment-related personal information from the CCPA’s provisions. It limits employee rights to notice of the categories of personal information collected by the business and the purpose for doing so, and the right to bring a private right of action against a business that experiences a data breach affecting their personal information.

In November, California voters passed the California Privacy Rights Act (CPRA), which amends and supplements the CCPA, expanding compliance obligations for companies and rights for consumers. Of particular note, the CPRA extends the employment-related personal information carve-out until January 1, 2023. The CPRA also introduces consumer rights relating to certain sensitive personal information, imposes an affirmative obligation on businesses to implement reasonable safeguards to protect certain consumer personal information, and prevents businesses from retaliating against employees for exercising their rights. The CPRA’s operative date is January 1, 2023, and draft implementation regulations are expected by July 1, 2022. Businesses should monitor CCPA/CPRA developments and ensure their privacy programs and procedures remain aligned with current CCPA compliance requirements.

In 2021, businesses can expect various states, including Washington, New York, and Minnesota to propose or enact CCPA-like legislation.

  3. Biometric Data

There was a continued influx of biometric privacy class action litigation in 2020, and this will likely continue in 2021. In early 2019, the Illinois Supreme Court handed down a significant decision concerning the ability of individuals to bring suit under Illinois’s Biometric Information Privacy Act (BIPA). In short, individuals need not allege actual injury or adverse effect beyond a violation of their rights under BIPA to qualify as an aggrieved person and be entitled to seek liquidated damages, attorneys’ fees and costs, and injunctive relief under the Act.

Consequently, simply failing to adopt a policy required under BIPA, collecting biometric information without a release or sharing biometric information with a third party without consent could trigger liability under the statute. Potential damages are substantial as BIPA provides for statutory damages of $1,000 per negligent violation or $5,000 per intentional or reckless violation of the Act. There continues to be a flood of BIPA litigation, primarily against employers with biometric timekeeping/access systems that have failed to adequately notify and obtain written releases from their employees for such practices.

Like many aspects of 2020, biometric class action litigation has also been impacted by COVID-19. Screening programs in the workplace may involve the collection of biometric data, whether by a thermal scanner, facial recognition scanner or other similar technology. In late 2020, plaintiffs’ lawyers filed a class action lawsuit on behalf of employees concerning their employer’s COVID-19 screening program, which is alleged to have violated the BIPA. According to the complaint, employees were required to undergo facial geometry scans and temperature scans before entering company warehouses, without prior consent from employees as required by law. More class action lawsuits of this nature are likely on the horizon.

The law in this area is still lagging behind the technology but starting to catch up. In addition to Illinois’s BIPA, Washington and Texas have similar laws, and states including Arizona, Florida, Idaho, Massachusetts and New York have also proposed such legislation. The proposed biometric law in New York would mirror Illinois’ BIPA, including its private right of action provision. In California, the CCPA also broadly defines biometric information as one of the categories of personal information protected by the law.

Additionally, states are increasingly amending their breach notification laws to add biometric information to the categories of personal information that require notification, including 2020 amendments in California, D.C., and Vermont. Similar proposals across the U.S. are likely in 2021.

A report released by Global Market Insights, Inc. in November 2020 estimates the global market valuation for voice recognition technology will reach approximately $7 billion by 2026, in large part due to the surge of AI and machine learning across a wide array of devices including smartphones, healthcare apps, banking apps, and connected cars, just to name a few. Voice recognition is generally classified as a biometric technology, which allows the identification of a unique human characteristic (e.g., voice, speech, gait, fingerprints, iris or retina patterns); as a result, voice-related data qualifies as biometric information and, in turn, personal information under various privacy and security laws. For businesses exploring the use of voice recognition technology, whether for use by their employees to access systems or when manufacturing a smart device for consumers or patients, there are a number of privacy and security compliance obligations to consider, including the CCPA, GDPR, state data breach notification laws, BIPA, COPPA, vendor contract statutes, and statutory and common law safeguarding mandates.

  4. HIPAA

During 2020, the Office for Civil Rights (OCR) at the U.S. Department of Health and Human Services was active in enforcing HIPAA regulations. The past year saw more than $13.3 million recorded by OCR in total resolution agreements. OCR settlements have impacted a wide array of health industry-related businesses, including hospitals, health insurers, business associates, physician clinics, and mental health/substance abuse providers. Twelve of these settlements were under the OCR’s Right of Access Initiative, which enforces patients’ rights to timely access to medical records at reasonable cost. It is likely this level of enforcement activity will continue in 2021.

The past year produced a significant amount of OCR-issued guidance relating to HIPAA. In March OCR issued back-to-back guidance on COVID-19-related issues, first regarding the provision of protected health information (PHI) of COVID-19 exposed individuals to first responders, and next providing FAQs for telehealth providers. In July, the director of the OCR issued advice to HIPAA subject entities in response to the influx of recent OCR enforcement actions: “When informed of potential HIPAA violations, providers owe it to their patients to quickly address problem areas to safeguard individuals’ health information.” Finally in September, the OCR published best practices for creating an IT asset inventory list to assist healthcare providers and business associates in understanding where electronic protected health information (ePHI) is located within their organization and improve HIPAA Security Rule compliance, and shortly after it issued updated guidance on HIPAA for mobile health technology.

In December, Congress amended the Health Information Technology for Economic and Clinical Health Act to require the Secretary of Health and Human Services to consider certain recognized security practices of covered entities and business associates when making certain determinations, and for other purposes. In 2021, businesses will want to review their information security practices in light of applicable recognized security practices in an effort to demonstrate reasonable safeguards and potentially minimize penalties in the event of a cybersecurity incident.

  5. Data Breaches

The past year was marked by an escalation in ransomware attacks, sophisticated phishing emails, and business email compromises. Since many of these attacks were fueled in part by vulnerabilities due to an increased remote workforce, 2021 will likely be more of the same.

Recently, the National Labor Relations Board (NLRB), in a split 2-1 decision, approved a California-based ambulance company’s implementation of a social media policy that prohibited employees from “inappropriate communications” related to the company. The NLRB’s ruling reversed an October 2019 decision by an administrative law judge, who concluded that the company’s social media policy was overly broad and infringed on workers’ rights established in the National Labor Relations Act (NLRA).

Key aspects of the company’s workplace social media policy included:

  • Prohibition on disclosure of proprietary or confidential information of the employer or co-workers.
  • Limitations on an employee’s use of the employer’s name, logo, trademarks, or other symbols in social media to endorse, promote, denigrate or otherwise comment on any product, opinion, cause or person.
  • Prohibition on posting of photos of coworkers without their written consent.
  • Prohibition on use of social media to disparage the employer or others.
  • Prohibition of “inappropriate communications” generally on social media.
  • Prohibition of sharing of employee compensation information.

The majority highlighted that, “[t]he legitimate justifications for the respondent’s nondisparagement rule are substantial, and we find that they outweigh any potential adverse impact of the respondent’s facially neutral rule on protected rights”.

NLRB member Lauren McFerran, the only dissenter, emphasized that the decision “again illustrates how eager the board majority is to uphold employer rules, how unwilling it is to consider rules from an employee’s true perspective and how little weight it gives to the rights protected by our statute.”

Back in 2017, in Boeing Company, the NLRB set out a new standard for determining whether a facially neutral work rule, reasonably interpreted, would unlawfully interfere with, restrain, or coerce employees in the exercise of their NLRA rights. In Boeing Company, the NLRB overruled the “reasonably construed” prong established in Lutheran Heritage Village-Livonia (2004), which held that a work rule that did not otherwise violate the NLRA would be found unlawful if employees would reasonably construe it to prohibit the exercise of NLRA rights. Instead, the NLRB held in Boeing Company that, when evaluating a facially neutral policy, rule, or handbook provision that, when reasonably interpreted, would potentially interfere with the exercise of NLRA rights, the Board will evaluate two things: (i) the nature and extent of the potential impact on NLRA rights, and (ii) the legitimate justifications associated with the rule.

This evaluation system would, “strike the proper balance between . . . asserted business justifications behind the policies, on the one hand, and the invasion of employees’ rights in light of the Act and its policy.”

In the NLRB’s latest decision, analyzing the California ambulance company’s workplace social media policy, the NLRB relied on Boeing Company’s evaluation standard, as well as other recent NLRB decisions related to workplace social media policies. For example, in July 2020, the Board, citing Boeing Company, held in Motor City Pawn Brokers Inc. that “the work rules at issue fall squarely into the category of lawful, commonsense, facially neutral rules that require employees to foster ‘harmonious interactions and relationships’ in the workplace and adhere to basic standards of civility.”

Takeaway

When companies are faced with adverse social media activity or campaigns, whether by employees, customers, bloggers, or others, they frequently are unprepared to take the appropriate steps to investigate, or to weigh the legal, business, reputational, and related risks in deciding what actions, if any, to take. For this reason, it is important to have a clear workplace social media policy in place to help prevent such an incident or at least limit its impact. But while the NLRB has seemed employer-friendly of late in approving such policies, it is important to tread carefully, aiming to develop a policy that achieves the company’s legitimate business interests without compromising its employees’ NLRA rights. This is especially true as the NLRB’s current majority will change in summer 2021.

As employers continue to grapple with a safe return to the workplace, on January 21, the U.S. Centers for Disease Control and Prevention (CDC) issued new guidance for businesses and employers on SARS-CoV-2 testing of employees, as part of a more comprehensive approach to reducing transmission of the virus in non-healthcare workplaces. While the CDC had already released some guidance on the matter of workplace testing (last updated in October), the CDC’s more recent guidance places a new emphasis on informed consent prior to testing and on measures an employer can take to ensure employees are fully supported in their decision-making.

Specifically, the CDC’s guidance states:

Workplace-based testing should not be conducted without the employee’s informed consent. Informed consent requires disclosure, understanding, and free choice, and is necessary for an employee to act independently and make choices according to their values, goals, and preferences. (emphasis in original)

For employers that have required employees to submit to COVID-19 viral testing in order to enter the workplace consistent with EEOC guidance, the CDC’s reference to an informed consent may come as a bit of a surprise. However, while the CDC’s guidance appears to require informed “consent,” it does not appear to prevent employers from requiring testing as a condition of entering the workplace. The CDC’s guidance seems to clarify its position, recommending that employers provide employees:

complete and understandable information about how the employer’s testing program may impact employees’ lives, such as if a positive test result or declination to participate in testing may mean exclusion from work. (emphasis added)

When developing a SARS-CoV-2 testing program, according to the CDC, an employer should first address some basic considerations. For example: why the employer is offering testing in the first place, how frequently employees will be tested, how to effectively obtain employee consent, and what to do if an employee declines to be tested.

The CDC provides a list of key measures an employer should implement when developing a SARS-CoV-2 testing program in the workplace to ensure employee informed consent and a supportive environment:

  • Ensure safeguards are in place to protect an employee’s privacy and confidentiality.
  • As noted above, provide complete and understandable information about how the employer’s testing program may impact employees’ lives, such as if a positive test result or declination to participate in testing may mean exclusion from work.
  • Explain any parts of the testing program an employee would consider especially important when deciding whether to participate. This involves explaining the key reasons that may guide their decision.
  • Provide information about the testing program in the employee’s preferred language using non-technical terms. Consider obtaining employee input on the readability of the information. Employers can use this tool to create clear messages.
  • Encourage supervisors and co-workers to avoid pressuring employees to participate in testing.
  • Encourage and answer questions during the consent process. The consent process is active information sharing between an employer or their representative and an employee, in which the employer discloses the information, answers questions to facilitate understanding, and promotes the employee’s free choice.

In addition, in order to ensure informed consent, an employee must be provided certain disclosures regarding the workplace testing program. Of course, the disclosures must include those required in the U.S. Food and Drug Administration (FDA) emergency use authorization patient fact sheet for the particular test, such as the type of the test, how the test will be performed, and known and potential risks. Notably, these disclosures must be provided during the consent process, meaning employers will have to know this information and ensure it is provided to employees before the employee agrees to the test.

Employers will need to consider which aspects of the testing program may be more relevant than others to an employee’s decision whether to accept an offered test and include the appropriate disclosures. Areas to consider include the process for scheduling tests and how the cost of the tests will be covered, what employees should expect at the testing site (e.g., screening), recommended next steps if an employee tests positive, and what assistance is available should an employee be injured while the test is administered.

There are, of course, privacy and security issues to consider when implementing such a program. For example, an employer must consider what personal information the employee will need to provide to the test provider (e.g., name, DOB, insurance), the test results that follow, and the myriad of issues that arise once that information is obtained. For example: Where and for how long will the employer retain the results? How will the employer keep personal information and test results confidential and secure? Who will have access to the results?

The employee’s test results will be considered confidential medical information, and while not subject to HIPAA in the employer-employee context, this information still may have protections under state statutory and common law. Consider, for example, that several states, such as California and Florida, include “medical information” as part of the definition of “personal information” under their breach notification laws. Accordingly, if that information is breached, which could include access to the information by an unauthorized party, notification to impacted individuals and relevant state agencies may be required. Additionally, statutory and common law obligations exist requiring employers to safeguard employee personal information, which may include information about their physical health, such as test results or information provided by the employee before taking the test. Thus, maintaining reasonable safeguards to protect such information is prudent. This might include access management measures and record retention and destruction policies. It also may include having clear guidelines for making disclosures of this information and determining whether an authorization is needed before such information may be disclosed to, or accessed by, a third party.

The COVID-19 pandemic has completely reshaped workplace practices, and we have certainly entered a “new normal.”  Just earlier this week, we discussed on this blog the EEOC’s guidance on best practices for workplace identification of employees that have been vaccinated. And temperature and symptom screening protocols in the workplace have been mandated or recommended by nearly every state and city across the U.S. These measures are essential in halting the spread of the virus, and ensuring a safe and healthy workplace and workforce. Nevertheless, organizations must consider the legal risks, challenges, and requirements prior to implementation of such measures.

On December 8th, the Association of Corporate Counsel (ACC), which represents over 45,000 in-house counsel across 85 countries, announced the launch of its Data Steward Program (DSP) to help organizations and their law firms assess and share information about the security of client data. The DSP was two years in the making, collecting input from attorneys, cybersecurity and privacy experts, and litigation support experts from corporations, law firms, vendors, and government. The DSP, a voluntary program, creates a standardized framework for “assessing, scoring, benchmarking, validating and accrediting” a law firm’s stance regarding client data security, leveraging existing data security frameworks, such as ISO or NIST, but also customizing “control selection, arrangement and compliance metrics” to meet a law firm’s specific needs.

The DSP was developed in response to the struggles corporations face in attempting to ensure that the law firms they utilize have adequate data security measures in place – a Fortune 500 company often has relationships with upwards of 500 law firms and vendors. Moreover, SMBs that utilize smaller sized law firms and vendors are often ill equipped to effectively perform data security related due diligence.

Of course, for all service providers, including law firms, it is critical to maintain reasonable administrative, physical, and technical safeguards when interacting with sensitive corporate and personal data of customers, as well as to ensure that adequate protections are in place to prevent and respond to data breaches. Law firms should not be surprised to see enhanced efforts, such as the DSP, to help assess those safeguards on a more consistent basis. Firms concerned about facing requests for assessments and/or maintaining their privacy and security protocols in an increasingly dynamic environment should review their cybersecurity risk management policies, procedures and practices sooner rather than later.

The ACC DSP has established a clear set of goals to help ensure the program’s success:

  • Exacting and Thorough Assessment
    • Requiring a “rigorous and thorough review” of a law firm’s data security status, detailed enough for both law firms and clients to make adequate business decisions. This is satisfied by “selecting and/or modeling controls” from established data security frameworks including ISO and NIST.
  • Value to All Participants
    • The DSP aims to ensure all relevant parties are involved in the standard setting process. “The balanced needs of all parties were represented (and will be maintained) by putting the DSP under the creative control of an ACC-sponsored working group of industry experts, including ACC Members, law firm partners, information security officers and CIOs, legal industry service providers and data security assessment firms who truly understand the issues and practices of the legal industry.”
  • Secure Platform
    • The DSP’s data-sharing platform, titled Data Steward Exchange (DSP-X), operates on a third-party SaaS platform with “an established record of security and has recently passed its latest SOC-2 audit.”
  • Open Standard Benchmarking
    • The DSP algorithm for scoring is 100% transparent and available to all participants.
  • Accommodate Legal Practice Diversity
    • The practice standards established by the ACC Working Group were designed to be applicable to law firms across all sizes and specialties, and all law firms are invited to participate in the DSP.
  • Independent Assessor Neutrality
    • The DSP establishes that an ACC accredited assessor performing a review may not perform either data security prevention or remediation services for that participant six months prior to or following an accreditation validation, to ensure neutrality.

This is not the first time of late that the ACC has prioritized data security and privacy matters for in-house counsel and law firms. In 2017, the ACC released Model Information Protection and Security Controls for Outside Counsel Possessing Company Confidential Information (“the Model Controls”), data safety guidelines to help “in-house counsel as they set expectations with their outside vendors, including outside counsel.” The Model Controls addressed a broad range of data security related measures including: data breach reporting, data handling and encryption, physical security, employee background screening, information retention/return/destruction, and cyber liability insurance. The Model Controls were developed to serve as a “best practice” standardizing the protocols companies implement when interacting with third-party vendors who may have access to sensitive corporate data, and in many ways the DSP is a continuation of that initiative.

The DSP can be initiated in one of two ways: 1) a law firm can volunteer to participate and conduct a self-assessment, or 2) an ACC corporate member or prospective member can invite a law firm to participate. Even prior to launch, corporations were already inviting their law firms and legal vendors to undergo an assessment. 2020 has proven that data privacy and security risks must be prioritized across all industries.