The CCPA has reached the one-year mark. This is a good time for businesses to review the success of their compliance programs and recalibrate for the CCPA’s second year. Here are a few suggestions to kick off that review:

  1. Privacy Policies. The CCPA requires a business to update the information in its privacy policy, or any California-specific description of consumers’ privacy rights, at least once every 12 months. If a business has not already done so, now is a good time to review both online and offline data collection practices to ensure privacy policies accurately disclose, at a minimum, the categories of personal information (“PI”) the business collected, sold, and disclosed for a business purpose in the preceding 12 months.

Given the challenges of the last several months, a business may be collecting PI beyond what it currently discloses in its privacy policies. For example, a company may need to update its privacy policies to disclose the collection and use of COVID-19 related screening information, biometric information, or PI collected as a result of remote work situations.

If the business needs to update its privacy policy to reflect additional data collection activities, it will likely need to update its “notices at collection,” including employee and job applicant privacy notices.

  2. Employee Training. The CCPA provides that a business shall ensure all individuals responsible for handling inquiries about consumer rights, the business’s privacy practices, or its compliance with the CCPA are informed of the applicable CCPA requirements. Businesses will want to review their training programs to ensure they now include appropriate CCPA-related training; determine whether employee handbooks and manuals have been updated accordingly; and document that relevant employees have received training.
  3. Reasonable Safeguards. The CCPA does not currently impose an affirmative obligation on a business to implement reasonable safeguards to protect consumer PI; however, it provides consumers a private right of action where their PI has been involved in a data breach resulting from the business’s failure to implement reasonable security safeguards. As a best practice, a business will want to review whether it has performed a risk assessment, at least annually, to identify new or enhanced risks, threats, or vulnerabilities to its systems or the PI it collects or maintains; whether it has reviewed and updated its written information security program and data retention schedule; and whether it has practiced its incident response plan.

CCPA compliance is an ongoing activity, and these three action items are particularly worthy of review at the one-year mark. However, further year-end review might also include an assessment of the accessibility of the business’s website; confirmation that existing service provider agreements have been amended to satisfy the CCPA, where appropriate; and verification that all new service provider contracts include relevant CCPA provisions.

Although this post is focused on the CCPA, it is important to note that California recently passed the California Privacy Rights Act (“CPRA”). The CPRA supplements and amends the CCPA. Two CPRA provisions are worth noting as they relate to items on this action item list. First, effective January 1, 2023, businesses will have an affirmative obligation to implement reasonable safeguards. Second, businesses will be required to disclose their collection and use of “sensitive personal information” and to permit individuals to limit the business’s use of this information in certain circumstances. By adding these new provisions, the CPRA builds upon and expands the CCPA, inching it a bit closer to the EU General Data Protection Regulation.

The California Privacy Rights Act of 2020 (CPRA) becomes operative on January 1, 2023. Among its numerous amendments and additions to the existing California Consumer Privacy Act (CCPA), the CPRA expands the definition of Personal Information. Specifically, it adds the category of Sensitive Personal Information. This new category tracks the EU General Data Protection Regulation’s definition of Special Category Data, adds data elements commonly viewed in the U.S. as sensitive, and introduces a new twist by including the contents of a consumer’s mail, email, and text messages.

The CPRA broadly defines Sensitive Personal Information as Personal Information that is not publicly available and reveals:

  • a consumer’s social security, driver’s license, state identification card, or passport number;
  • a consumer’s account log-in, financial account, debit card, or credit card number in combination with any required security or access code, password, or credentials allowing access to an account;
  • a consumer’s precise geolocation;
  • a consumer’s racial or ethnic origin, religious or philosophical beliefs, or union membership;
  • the contents of a consumer’s mail, email, and text messages, unless the business is the intended recipient of the communication;
  • a consumer’s genetic data;
  • the processing of biometric information for the purpose of uniquely identifying a consumer;
  • personal information collected and analyzed concerning a consumer’s health; or
  • personal information collected and analyzed concerning a consumer’s sex life or sexual orientation.
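For teams building or refreshing a data inventory, the statutory categories above can be captured as a simple lookup table. The sketch below is purely illustrative: the category labels are hypothetical shorthand for the CPRA’s enumerated categories (not statutory text), and the “not publicly available” threshold is simplified to a single flag.

```python
# Illustrative only: a simplified mapping of the CPRA's Sensitive Personal
# Information categories for use when tagging fields in a data map.
# Category labels are hypothetical shorthand, not statutory language.
SENSITIVE_PI_CATEGORIES = {
    "ssn", "drivers_license", "state_id", "passport_number",
    "account_login_with_credentials", "financial_account_with_access_code",
    "precise_geolocation",
    "racial_or_ethnic_origin", "religious_or_philosophical_beliefs",
    "union_membership",
    "mail_email_text_contents",  # unless the business is the intended recipient
    "genetic_data",
    "biometric_info_for_identification",
    "health_data",
    "sex_life_or_sexual_orientation",
}

def is_sensitive_pi(category: str, publicly_available: bool = False) -> bool:
    """Return True if a data element category falls within the (simplified)
    Sensitive Personal Information definition. The CPRA's definition covers
    only information that is not publicly available."""
    if publicly_available:
        return False
    return category in SENSITIVE_PI_CATEGORIES
```

A tagging pass like this is only a starting point; whether a given field actually reveals a listed category is a judgment call that the statute, not the lookup table, controls.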

The addition of this new category of Personal Information creates two primary obligations for businesses. First, a business will need to include Sensitive Personal Information in its notice at collection to consumers, including job applicants and employees, and in any online privacy policy or California-specific description of consumer rights. Under the CPRA, this notice must now also disclose the categories of Sensitive Personal Information to be collected, the purposes for which they will be used, whether this information will be sold or shared, and the length of time the business intends to retain each category of Sensitive Personal Information.

Second, when a business collects or processes Sensitive Personal Information for the purpose of “inferring characteristics” about a consumer, it may do so only to provide services or goods requested by the consumer, for limited purposes enumerated by the CPRA, and as authorized by future implementation regulations. If the business intends to use or disclose this information for any other purpose, it must provide the consumer with notice of the intended use or disclosure and of the consumer’s right to limit that use or disclosure. To facilitate the exercise of this right, a business must provide the consumer with an opt-out mechanism entitled “Limit the Use of My Sensitive Personal Information.” Sensitive Personal Information that is not collected or processed for the purpose of inferring a consumer’s characteristics is not subject to this right to limit its use or disclosure.

Although the GDPR and the CPRA share similar definitions of sensitive data, there are two significant differences worth noting. The GDPR prohibits collecting and processing Special Category Data absent the explicit, informed, affirmative (i.e., opt-in) consent of the individual, or pursuant to limited circumstances enumerated in the GDPR. In contrast, the CPRA permits collecting and processing Sensitive Personal Information. However, the consumer may limit (i.e., opt out of) the use and disclosure of this data when a business collects it for the purpose of inferring the consumer’s characteristics and will use or disclose it beyond what is necessary to provide the requested services or goods to the consumer, and as narrowly permitted by the CPRA and any implementation regulations.

In anticipation of January 1, 2023, preparations should include revisiting or expanding existing data mapping activities to identify the collection of Sensitive Personal Information, reviewing the purposes for collecting this information and how the business uses or discloses it, and determining whether its use or disclosure is permitted or authorized by the CPRA. As with preparations for the CCPA, this will require an interdisciplinary team with a broad understanding of business operations. Any such team should include members familiar with the business’s advertising, marketing, and website data collection activities to help identify where Sensitive Personal Information may be collected for the purpose of inferring consumer characteristics.

For additional information on the CPRA, please reach out to a member of the Jackson Lewis Privacy, Data and Cybersecurity practice group or check out our CPRA blog series.

A new report released by Global Market Insights, Inc. last month estimates that the global market valuation for voice recognition technology will reach approximately $7 billion by 2026, in large part due to the surge of AI and machine learning across a wide array of devices, including smartphones, healthcare apps, banking apps, and connected cars, just to name a few. Whether used for a quick hands-free search on a phone or a voice command while driving, voice recognition technology has made consumer devices more effortless to use. Particularly in the wake of the COVID-19 pandemic, companies that may never have considered voice recognition technology are rethinking their employee access control systems and considering touchless authentication technologies, like voice recognition, as the main form of entry into their workspaces, as opposed to fingerprint scanners or keypads that increase the risk of spreading germs or viruses.

But while the ease and efficiency of voice recognition technology are clear, the privacy and security obligations associated with this technology cannot be overlooked. Voice recognition is generally classified as a biometric technology, one that allows identification based on a unique human characteristic (e.g., voice, speech, gait, fingerprints, or iris or retina patterns). As a result, voice-related data qualifies as biometric information, and in turn personal information, under various privacy and security laws. For businesses that want to deploy voice recognition technology, whether for use by their employees to access systems or when manufacturing a smart device for consumers or patients, there are a number of privacy and security compliance obligations to consider. Here are just a few:

  • EU’s General Data Protection Regulation (GDPR)
    • The GDPR, effective since May 2018, classifies “voice” as “personal data.” While GDPR Article 4(1), which defines “personal data,” does not specifically refer to “voice” but rather to “one or several properties unique to their physical, physiological identity…”, the European Data Protection Board has taken the position that “voice recognition” is an example of a physical or physiological biometric identification technique. For businesses that process the personal data of data subjects (EU residents), the GDPR grants those data subjects an array of rights (e.g., the right to access and the right to delete) and imposes significant privacy and security obligations on the controllers and processors of that data.
  • California Consumer Privacy Act (CCPA)
    • The recently enacted California Consumer Privacy Act (CCPA) may apply to a business that collects the personal data of a California resident, regardless of whether the organization is located in California. Under the Act, a covered business must provide a resident with information about its data collection practices, including the personal information it collects, discloses, and sells, as well as the right to delete this data and to object to its sale. Notably, the Act prohibits an individual from waiving these rights. The CCPA includes “biometric information” as an enumerated category of “personal information.” The Act’s definition of “biometric information” states that “[b]iometric information includes, but is not limited to, imagery of the iris, retina, fingerprint, face, hand, palm, vein patterns, and voice recordings, from which an identifier template, such as a faceprint, a minutiae template, or a voiceprint, can be extracted.”
  • Biometric Information Privacy Act (BIPA)
    • The BIPA sets forth a comprehensive set of rules for companies doing business in Illinois when collecting biometric identifiers or information of state residents. The BIPA has several key features:
      • informed consent prior to collection;
      • a limited right to disclose biometric information;
      • a written policy requirement addressing retention and data destruction guidelines; and
      • a prohibition on profiting from biometric data.
    • The definition of “biometric identifiers” under the BIPA includes a “voiceprint” (using voice to verify an individual’s identity). Voiceprinting has been the subject of significant BIPA litigation of late, particularly in the context of virtual assistants. While these cases have been tossed for reasons unrelated to voiceprinting itself (e.g., lack of personal jurisdiction), as plaintiffs continue to expand the scope of BIPA targets, companies utilizing voiceprinting will increasingly face exposure to BIPA litigation.
  • Children’s Online Privacy Protection Act (COPPA)
    • Under COPPA, there are strict consent requirements for the collection and storage of data of children under 13. That said, in 2017, the Federal Trade Commission issued guidance on COPPA in the context of voice recordings, relaxing the rule a bit: “The Commission recognizes the value of using voice as a replacement for written words in performing search and other functions on internet-connected devices. Verbal commands may be a necessity for certain consumers, including children who have not yet learned to write or the disabled… as such when a covered operator collects an audio file containing a child’s voice solely as a replacement for written words, such as to perform a search or fulfill a verbal instruction or request, but only maintains the file for the brief time necessary for that purpose, the FTC would not take an enforcement action against the operator on the basis that the operator collected the audio file without first obtaining verifiable parental consent. Such an operator, however, must provide the notice required by the COPPA Rule, including clear notice of its collection and use of audio files and its deletion policy, in its privacy policy.” While the FTC has, to date, not issued any COPPA violations in the context of voice recordings, its requirements should not be ignored.
  • State Statutory and Common Law Mandates to Safeguard Personal Data
    • Multiple states impose an affirmative duty to use reasonable measures to safeguard personal data that an organization collects or owns, which increasingly includes biometric information. The applicability of these laws may depend on the location of the organization’s facilities and the consumer/employee/patient’s state of residency. Many of these safeguarding laws provide a general framework for compliance, without mandating specific measures. However, “reasonable” generally implies safeguards appropriate to the sensitivity of the data, and one need only look to more robust data security frameworks, such as under HIPAA and the Massachusetts data security regulations, to get a sense of what safeguards may be appropriate. These statutory duties to safeguard are driving increased contractual obligations between businesses exchanging personal information to carry out the terms of the agreement. At the same time, some courts have identified common law duties to safeguard personal data.
  • State Mandates Regarding Data Destruction and Disposal
    • Currently, more than thirty states have data destruction and disposal laws. These laws require taking reasonable steps to securely dispose of records containing personal information by shredding, erasing or other methods. States such as Massachusetts include biometric information as a category of personal information subject to these requirements. Organizations should also implement a data retention schedule that ensures the destruction of biometric information, including voiceprints, once it is no longer needed as part of meaningful data destruction practices.
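A data retention schedule of the kind described above is often enforced with a simple periodic sweep that flags records whose retention period has lapsed. The sketch below is a hypothetical illustration, not a compliance tool: the record fields and the 365-day default period are assumptions, and the actual retention period should come from the organization’s documented schedule.

```python
# Hypothetical sketch: flag biometric records (e.g., voiceprints) that have
# outlived their retention period so they can be routed to secure destruction.
from datetime import date, timedelta

def records_due_for_destruction(records, today=None, retention_days=365):
    """Return records collected longer ago than the retention period.

    Each record is assumed to be a dict with a 'collected_on' date. The
    retention_days default is a placeholder; use the period set by the
    organization's documented retention schedule.
    """
    today = today or date.today()
    cutoff = today - timedelta(days=retention_days)
    return [r for r in records if r["collected_on"] < cutoff]
```

Flagging is only the first half; secure disposal (shredding, erasure, or equivalent) of the flagged records is what the state laws above actually require.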
  • State Data Breach Notification Laws
    • All fifty U.S. states have data breach notification laws. In general, these laws require an entity that owns or licenses personal information about a state resident to report a data breach to individuals whose personal information is affected and, in some cases, to the state attorney general or other agencies, the media, and credit reporting agencies. Each state has its own definition of personal information, and states such as California, Texas, Florida, and Arizona include health, medical, and/or biometric information. Unauthorized acquisition of or access to such personal information, whether by hackers or employee error, can require notifications to individuals, creating significant exposure and reputational harm for the organization. Perhaps a greater concern from such a compromise is the exfiltration of voiceprint data that could be used by hackers as credentials to access other user accounts.
  • Vendor Contract Statutes
    • An increasing number of states including California, Massachusetts, New York, and Oregon statutorily require a business to conduct due diligence before sharing or disclosing certain categories of personal information to a third-party service provider, which likely include biometric information. Many of these statutes also require contractually obligating the vendor to maintain safeguards appropriate to the sensitivity of the data, which is a good practice even if a written agreement is not mandated by the statute.

Conclusion

Voice recognition technology is booming and continues to reach facets of life that were hard to contemplate even a few years ago. The technology brings innumerable potential benefits as well as significant data privacy and cybersecurity risks. Organizations that collect, use, and store voice data face increasing compliance obligations as the law attempts to keep pace with technology, cybersecurity crime, and public awareness of data privacy and security. Creating a robust data protection program, or regularly reviewing an existing one, is a critical risk management and legal compliance step.

The City of Portland, Oregon has become the first city in the United States to ban the use of facial recognition technologies in the private sector, citing, among other things, a lack of standards for the technology and wide ranges in accuracy and error rates that differ by race and gender. Failure to comply can be painful. Similar to the remedy available under the Illinois Biometric Information Privacy Act, which has fueled hundreds of class action lawsuits, the Ordinance provides persons injured by a material violation a cause of action for damages or $1,000 per day for each day of violation, whichever is greater. The Ordinance is effective January 1, 2021.

Facial recognition technology has become more popular in recent years, including during the COVID-19 pandemic. As the need arose to screen persons entering a facility for symptoms of the virus, including temperature, thermal cameras, kiosks, and other devices embedded with facial recognition capabilities were put into use. However, many have objected to the use of this technology in its current form, citing problems with the accuracy of the technology, as summarized in a June 9, 2020 New York Times article, “A Case for Banning Facial Recognition.”

Under the Ordinance, a “private entity” shall not use “face recognition technologies” in “places of public accommodation” within the boundaries of the City of Portland. “Face recognition technologies” under the Ordinance means:

an automated or semi-automated process that assists in identifying, verifying, detecting, or characterizing facial features of an individual or capturing information about an individual based on an individual’s face.

Places of public accommodation include any place or service offering to the public accommodations, advantages, facilities, or privileges whether in the nature of goods, services, lodgings, amusements, transportation or otherwise. This covers just about any private business and organization. Note, Portland also passed a separate ordinance prohibiting the use of facial recognition technology by the city government.

There are some exceptions, however. Places of public accommodation do not include “an institution, bona fide club, private residence, or place of accommodation that is in its nature distinctly private.” It is not clear from the Ordinance what it means to be “distinctly private.” Also, the Ordinance does not apply:

  • When facial recognition technologies are necessary to comply with federal, state, or local law,
  • For user verification purposes by an individual to access the individual’s own personal or employer-issued communication and electronic devices, or
  • To automatic face detection services in social media applications.

So, in Portland, employees can still let their faces get them into their phones, including their company-provided devices. But businesses in Portland should evaluate whether they are using facial recognition technologies, whether they fall into one of the exceptions in the Ordinance, and, if not, what alternatives they have for the verification, security, and other purposes for which the technology was implemented.

Despite several attempts, Congress has struggled to push forward a federal consumer privacy law over the past few years. But the COVID-19 pandemic, which has raised concerns regarding location monitoring, GPS tracking, and the use of health data, has heightened the urgency for federal consumer privacy legislation. In May, a group of Democrats from the U.S. Senate and House of Representatives introduced the Public Health Emergency Privacy Act (“the Act”), aimed at protecting health information during the pandemic and regulating the use of that data with contact tracing technologies.

In late July, the Senate Committee on Appropriations introduced an Emergency Coronavirus Stimulus Package (“the Stimulus Package”), which would allocate $53 million of the $306 million package to the Department of Homeland Security’s Cybersecurity and Infrastructure Security Agency for the protection of coronavirus research data and related data. In addition, a group of 13 senators, including Kamala Harris, D-California, Elizabeth Warren, D-Massachusetts, and Mark Warner, D-Virginia, sent a letter to Senate and House leadership, asking for the Act to be included in the passage of the Stimulus Package.

“Health data is among the most sensitive data imaginable and even before this public health emergency, there has been increasing bipartisan concern with gaps in our nation’s health privacy laws,” the Senators stated in their letter.

“While a comprehensive update of health privacy protections is unrealistic at this time, targeted reforms to protect health data – particularly with clear evidence that a lack of privacy protections has inhibited public participation in screening activities – is both appropriate and necessary,” they added.

Under the Act, “Covered Organizations” are defined as “any person that collects, uses, or discloses emergency health data electronically or through communication by wire or radio; or that develops or operates a website, web application, mobile application, mobile operating system feature, or smart device application for the purpose of tracking, screening, monitoring, contact tracing, or mitigation, or otherwise responding to the COVID–19 public health emergency.” Note: Covered Organizations do not include a health care provider; a person engaged in a de minimis collection or processing of emergency health data; a service provider; a person acting in their individual or household capacity; or a public health authority.

The Act would protect “emergency health data” which means “data linked or reasonably linkable to an individual or device, including data inferred or derived about the individual or device from other collected data provided such data is still linked or reasonably linkable to the individual or device, that concerns the public COVID–19 health emergency.” Examples of such data include:

  • information that reveals the past, present, or future physical or behavioral health or condition of, or provision of healthcare to, an individual, including data derived from testing an individual. This likely would include COVID-19 viral or serological test results, along with genetic data, biological samples, and biometrics; and
  • other data collected in conjunction with other emergency health data or for the purpose of tracking, screening, monitoring, contact tracing, or mitigation, or otherwise responding to the COVID–19 public health emergency, such as (i) geolocation and similar information for determining the past or present precise physical location of an individual at a specific point in time, (ii) proximity data that identifies or estimates the past or present physical proximity of one individual or device to another, including information derived from Bluetooth, audio signatures, nearby wireless networks, and near-field communications; and (iii) any other data collected from a personal device.

Below are key requirements of the Act for Covered Organizations:

  • Only collect, use or disclose data that is necessary, proportionate and limited for a good-faith health purpose;
  • Take reasonable measures, where possible, to ensure the accuracy of data and provide a mechanism for individuals to correct inaccuracies;
  • Adopt reasonable safeguards to prevent unlawful discrimination on the basis of emergency health data;
  • Only disclose data to a government entity if it is to a public health authority and is solely for good faith public health purposes;
  • Establish and implement reasonable data security policies, practices and procedures;
  • Obtain affirmative express consent before collecting, using or disclosing emergency health data, and provide individuals with an effective mechanism to revoke that consent. NOTE: There are limited exceptions where consent is not required including to protect from fraud/malicious activity, to prevent a security incident, or if otherwise required by law;
  • Provide notice in the form of a privacy policy prior to collection that describes how and for what purposes the data will be used (including categories of recipients), the organization’s data security policies and practices, and how individuals may exercise their rights.

If the Act is enacted, the Federal Trade Commission (FTC) would be required to promulgate rules regarding data collection, use, and disclosure under the Act. In addition, both the FTC and state attorneys general would have enforcement authority over the Act.

The Act, if passed, would be a temporary measure that would terminate once COVID-19 is no longer deemed a public health emergency. Covered Organizations would be required to stop using or maintaining emergency health data within 60 days after the termination of the public health emergency and to destroy such data or render it not linkable.

With no comprehensive federal privacy framework in place, the Senators are urging Congressional leadership to allow for a measure that provides Americans with assurance that their sensitive health data will not be misused, which they argue “will give Americans more confidence to participate in COVID screening efforts, strengthening our common mission in containing and eradicating COVID-19.”

We will continue to update on the status of the Act and other related developments.

As we recently reported, the privacy-rights activist group that sponsored the California Consumer Privacy Act (“CCPA”) – Californians for Consumer Privacy – is pushing for an even more stringent privacy bill, the California Privacy Rights Act (“CPRA”). The CPRA has now qualified for the November 3, 2020 ballot, gathering more than the 600,000 valid signatures required, according to the memorandum circulated by the California Secretary of State. If California voters approve the initiative in November, the CPRA would significantly expand the rights of Californians under the CCPA starting on January 1, 2023, with certain provisions going into effect immediately.

What are some of the key provisions of the CPRA?

  • Establish the California Privacy Protection Agency (“CPPA”): The CPRA would establish the first agency of its kind in the United States. The Agency will be governed by a five-member board, including the Chair, and will have full administrative power, authority, and jurisdiction to implement and enforce the CCPA, instead of the California Attorney General.
  • “Sensitive Personal Information” vs. “Personal Information”: The CPRA defines “sensitive personal information” as a subset of personal information subject to stricter treatment. The definition is broad and includes government-issued identifiers (i.e., SSN, driver’s license, and passport numbers), account credentials, financial information, precise geolocation, racial or ethnic origin, religious beliefs, the contents of certain types of messages (i.e., mail, email, and text), genetic data, biometric information, and more.

The CPRA creates new obligations for companies and organizations processing sensitive personal information. It would also allow consumers to limit the use and disclosure of their sensitive personal information.

  • Additional Consumer Rights: In addition to the rights under the CCPA, consumers will have additional rights under the CPRA, including: a) the right to correct personal information; b) the right to know the length of data retention; c) the right to opt out of advertisers using precise geolocation; and d) the right to restrict usage of sensitive personal information.
  • Employee Data: Expanded Moratorium Until January 1, 2023: In general, most of the provisions of the CCPA do not cover employee data until at least January 1, 2021. The CPRA would expand that moratorium until at least January 1, 2023.
  • Expanded Breach Liability: In addition to the CCPA’s private right of action for breaches of nonencrypted, nonredacted personal information, the CPRA would expand that right to the unauthorized access or disclosure of an email address and password or security question that would permit access to an account, if the business failed to maintain reasonable security.

The CCPA has not yet celebrated its first anniversary, and its enforcement only begins July 1, 2020, yet companies doing business in California will soon have to grapple with the nuances brought by the CPRA. Jackson Lewis will continue to monitor developments with the CPRA as it marches to the ballot in November 2020.

As the COVID-19 pandemic presses on, privacy and security matters continue to be at the forefront for federal and state legislatures. We recently reported that Washington, D.C. updated its data breach notification law. Now, the Vermont legislature has also amended its data breach notification law, with significant overhauls including the expansion of its definition of personal information and the narrowing of the permissible circumstances under which substitute notice may be used. Bill S.110, amending Vermont’s Security Breach Notice Act, 9 V.S.A. §§ 2430 & 2435, was signed into law by Governor Phil Scott and will take effect July 1, 2020. In addition, Bill S.110 creates new duties and prohibitions with respect to student privacy directed toward educational technology services (similar to a law first enacted in California and later adopted by over 20 states).

Key updates to Vermont’s Security Breach Notice Act include:

  • Expansion of Personally Identifiable Information (PII)

Following many other states, the new law adds to the data elements that, if breached, could trigger a notification obligation. Prior to this amendment, the definition of PII in Vermont was limited to a consumer’s first name or first initial and last name in combination with one of four basic data elements, when unencrypted:

    • Social Security number;
    • Driver license or nondriver identification card number;
    • Financial account number or credit or debit card number, if circumstances exist in which the number could be used without additional identifying information, access codes, or passwords; or
    • Account passwords, personal identification numbers, or other access codes for a financial account.

The amended law includes these elements, and adds the following when combined with a consumer’s first name or first initial and last name:

    • Individual taxpayer identification number, passport number, military identification card number, or other identification number that originates from a government identification document that is commonly used to verify identity for a commercial transaction;
    • Unique biometric data generated from measurements or technical analysis of human body characteristics used by the owner or licensee of the data to identify or authenticate the consumer, such as a fingerprint, retina or iris image, or other unique physical representation or digital representation of biometric data;
    • Genetic information; and
    • Health records or records of a wellness program or similar program of health promotion or disease prevention; a health care professional’s medical diagnosis or treatment of the consumer; or a health insurance policy number.
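For readers who think in code, the before-and-after scope of the definition can be modeled as simple set logic. The following is purely an illustrative sketch, not legal advice; the element names are hypothetical shorthand for the statutory categories summarized above.

```python
# Illustrative sketch only -- not legal advice. Element names are
# hypothetical shorthand for the statutory categories described above.

NAME = {"first_name_or_initial_and_last_name"}

# The four pre-amendment data elements.
OLD_ELEMENTS = {"ssn", "driver_license", "financial_account", "account_password"}

# The amended law keeps those elements and adds new ones.
NEW_ELEMENTS = OLD_ELEMENTS | {
    "government_id_number", "biometric_data", "genetic_info", "health_record",
}

def notification_may_be_triggered(breached: set, elements: set) -> bool:
    """PII is implicated when a name is breached together with at least
    one covered data element (assuming the data was unencrypted)."""
    return bool(NAME & breached) and bool(elements & breached)

# Genetic information plus a name is covered only after the amendment.
record = {"first_name_or_initial_and_last_name", "genetic_info"}
print(notification_may_be_triggered(record, OLD_ELEMENTS))  # False
print(notification_may_be_triggered(record, NEW_ELEMENTS))  # True
```

The same pattern extends naturally to the “login credentials” rule discussed below, where the trigger is a username or email address combined with a password or security-question answer rather than a name.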

The amended law will also include notification requirements for breaches of “login credentials”. The amendment defines “login credentials” as “a consumer’s user name or e-mail address, in combination with a password or an answer to a security question, that together permit access to an online account.” If a breach is limited to “login credentials” (and no other PII), the data collector is only required to notify the Attorney General or Department of Finance, as applicable, if the login credentials were acquired directly from the data collector or its agent.

  • Substitute Notice

Previously, substitute notice was permitted where the cost of Direct Notice in writing or by telephone would exceed $5,000, more than 5,000 consumers would need to be notified, or the data collector did not have sufficient contact information.

Under the amended law, substitute notice is permitted only where the lowest cost of providing Direct Notice via writing, email, or telephone would exceed $10,000, or the data collector does not have sufficient contact information. Substitute notice is no longer permitted based solely on the number of affected consumers exceeding a certain threshold.
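The change can be expressed as a pair of predicates. This is an illustrative sketch under the thresholds stated above, not legal advice; the function and parameter names are hypothetical.

```python
# Illustrative sketch only -- not legal advice. Models the change to
# Vermont's substitute-notice rule described above; thresholds per the text.

def substitute_notice_permitted_old(cost_usd: float,
                                    num_consumers: int,
                                    has_contact_info: bool) -> bool:
    """Pre-amendment rule: cost over $5,000, more than 5,000 consumers,
    or insufficient contact information."""
    return cost_usd > 5_000 or num_consumers > 5_000 or not has_contact_info

def substitute_notice_permitted_new(cost_usd: float,
                                    has_contact_info: bool) -> bool:
    """Amended rule: lowest cost of Direct Notice over $10,000, or
    insufficient contact information. Headcount alone no longer qualifies."""
    return cost_usd > 10_000 or not has_contact_info

# A breach affecting 8,000 consumers at a $7,000 notice cost:
# permitted under the old rule (headcount), but not under the new one.
print(substitute_notice_permitted_old(7_000, 8_000, True))   # True
print(substitute_notice_permitted_new(7_000, True))          # False
```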

Student Privacy Law 

Finally, Bill S.110 also includes the Student Online Personal Information Protection Act, which prohibits an “operator” from sharing student data or using that data to target advertising at students for a non-educational purpose. Under the new law, “operator” means the operator of an Internet website, online service, online application, or mobile application that is used primarily for K-12 purposes and is designed and marketed as such. The passage of this law is particularly relevant during the COVID-19 pandemic, as student use of educational technology services has dramatically increased.

Conclusion

This amendment keeps Vermont in line with other states across the nation currently enhancing their data breach notification laws in light of recent large-scale data breaches and heightened public awareness.  Organizations across the United States should be evaluating and enhancing their data breach prevention and response capabilities.

In the midst of COVID-19 challenges, privacy and security matters continue to be at the forefront for federal and state legislatures. In late March, the Washington D.C. (“D.C.”) legislature amended its data breach notification law, with significant overhauls including an expansion of its definition of personal information, updates to notification requirements, and new credit monitoring obligations. The Security Breach Protection Amendment Act of 2019, b23-0215, passed the 12-member D.C. Council unanimously and was signed by D.C. Mayor Muriel Bowser on March 26. The new law became effective on May 19, 2020.

Key updates to D.C.’s new law include:

  • Expansion of personal information

Following many other states, the new law adds to the data elements that, if breached, could trigger a notification obligation.  Currently, personal information is defined as (1) any number or code, or combination of numbers or codes, such as an account number, security code, access code, or password, that allows access to or use of an individual’s financial or credit account, or (2) an individual’s first name or first initial and last name, or phone number, or address, in combination with any one or more of the following data elements: Social Security number; driver license number or D.C. identification card number; or credit card number or debit card number.

The amendment significantly expands the definition of personal information to include the following new data elements:

  • Identifiers including taxpayer identification number, passport number, military identification number and other unique identification numbers issued on a government document;
  • medical information;
  • genetic information and DNA profile;
  • health insurance information, including a policy number, subscriber information number, or any unique identifier used by a health insurer that permits access to an individual’s health and billing information;
  • biometric data; and
  • any combination of data elements listed above, that would enable a person to commit identity theft without reference to the individual’s name.

Personal information also includes “a user name or email address in combination with a password, security question and answer, or other means of authentication, or any combination of data elements [listed above] that permits access to an individual’s email account.”

  • Notification to Attorney General

Notification to the Office of the Attorney General is now required for any breach affecting 50 or more D.C. residents. Notice must be provided in the most expedient manner possible, without unreasonable delay, and in no event later than when notice is provided to affected residents. There are also several specific content requirements for notice to the Attorney General, including whether there is knowledge of any foreign country involvement.

  • GLBA/HIPAA Exemption

The new law exempts entities subject to GLBA or HIPAA if those entities maintain breach notification procedures and provide notification as required under those laws, as applicable. However, those entities must still notify the Attorney General of any breach that requires notification under GLBA or HIPAA.

  • Risk of Harm Threshold

If a person or entity reasonably determines, after a reasonable investigation and consultation with the Office of the Attorney General and federal law enforcement agencies, that the breach likely will not result in harm to affected individuals, notice is not required.

  • Free Mitigation Services for Affected Residents

D.C. joins California, Connecticut, Delaware, and Massachusetts in requiring companies to provide identity theft protection or credit monitoring services, at no cost, to residents affected by a breach. The new D.C. law requires a person or entity that experiences a breach involving Social Security numbers and/or taxpayer identification numbers to offer affected individuals at least 18 months of identity theft protection services at no cost.
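Two of the bright-line rules above lend themselves to a simple decision sketch. This is illustrative only, not legal advice; the thresholds come from the text above, and the function and field names are hypothetical.

```python
# Illustrative sketch only -- not legal advice. Models two bright-line
# rules from the amended D.C. law described above; names are hypothetical.

def ag_notice_required(dc_residents_affected: int) -> bool:
    """Notice to the D.C. Attorney General is required for any breach
    affecting 50 or more D.C. residents."""
    return dc_residents_affected >= 50

def monitoring_months_owed(breached_elements: set) -> int:
    """If the breach includes Social Security or taxpayer identification
    numbers, at least 18 months of identity theft protection is owed."""
    return 18 if breached_elements & {"ssn", "taxpayer_id"} else 0

print(ag_notice_required(75))                      # True
print(monitoring_months_owed({"ssn", "email"}))    # 18
```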

  • Data Security Requirements

Finally, and notably, the new law establishes data security requirements for covered businesses. In short, any business that owns, licenses, maintains, handles, or otherwise possesses personal information of D.C. residents must implement and maintain reasonable security safeguards, including procedures and practices appropriate to the nature of the personal information and to the nature and size of the entity and its operations. Further, covered entities must enter into written agreements with their third-party service providers requiring the service providers to implement and maintain similar security procedures and practices.

This amendment keeps Washington D.C. in line with other states across the nation currently enhancing their data breach notification laws in light of recent large-scale data breaches and heightened public awareness.  Organizations across the United States should be evaluating and enhancing their data breach prevention and response capabilities.

As organizations work feverishly to return to business in many areas of the country, they are mobilizing to meet the myriad challenges of providing safe environments for their workers, customers, students, patients, and visitors. Chief among these challenges are screening for COVID-19 symptoms, observing social distancing, contact tracing, and wearing masks. Fortunately, innovators are rising to meet this need, developing a range of technologies – wearables, apps, devices, kiosks, AI, etc. – all designed to support these efforts. But, for many organizations, the question is what technologies are out there and what they should be thinking about in deciding whether to adopt one or more of them.

Wading through the wide variety of COVID-19-related technologies can be like scrolling through your cable provider’s movie guide – lots of time spent, not sure what to choose. So, to help you get a quick, bird’s-eye view of some of the technologies being developed and which may be available, please see our table of “Selected COVID19 Distancing, Screening, Contact Tracing, and Other Technologies” (Table).*

Needless to say, compiling, implementing, enforcing, and documenting extensive and sometimes conflicting federal, state, and local mandates and recommendations for screening, distancing, contact tracing, and mask wearing requires a significant and ongoing effort. Technologies, such as those listed in the Table, can help. Some of the features of these technologies include:

  • Wearables that alert the wearer that he or she is getting too close to a colleague may boost an organization’s efforts to adhere to distancing requirements.
  • Kiosks with thermal scanning capabilities may facilitate temperature screening in a faster, more efficient way while minimizing contact that might further the spread of COVID-19.
  • Apps that track the locations of individuals could automate otherwise laborious manual contact tracing activities.

The advantages of these technologies can be substantial, quickening the path to compliance and opening the organization’s doors to business. However, organizations should proceed carefully to examine not only whether the particular solution will have the desired effect, but whether it can be implemented in a compliant manner with minimal legal risk. Below are some questions organizations should be considering:

  • What is the organization’s goal for the technology? If the organization’s goal is to keep workers who may have COVID-19 from entering its facility, then screening technologies may be worth considering. However, if the goal is to identify other workers who may have been exposed to a COVID-19-positive co-worker, then contact tracing technologies may be more appropriate. To this end, it is important to consider the organization’s goals prior to selecting technologies for implementation.
  • Does the technology work? For temperature taking/scanning technology, this may mean validating the accuracy of the device. When looking at contact tracing, accuracy will similarly be key to identifying co-workers who may be potentially impacted by COVID-19.
  • Will the technology require employees to incur expenses that must be reimbursed? In some states, the implementation of this technology may require reimbursement if workers must incur costs or expenses as part of the implementation. For example, if an app requires an employee to have a mobile device for work purposes, expense reimbursement obligations with respect to that device may exist.
  • Is bargaining with the union required? As organizations look to these technologies, there may be numerous instances where the organization will need to consult, and possibly engage in bargaining with, the applicable union(s). The particular technology being contemplated may dictate whether the organization’s efforts are supported or challenged.
  • Is notice/consent required? This may be a difficult question to answer without understanding the data that the technology is collecting. For example, the geolocation of employees, their COVID-19 status, and their interactions with others are all likely elements of personal information under the California Consumer Privacy Act (CCPA), which applies to employees who reside in California if the organization is subject to the law. Similarly, electronic tracking of workers or the collection of workers’ biometric information (facial scans, etc.) may require notice and/or consent depending on the state of implementation. If the technology requires access to an employee’s personally owned device, notice and consent are likely required and, in any event, are a best practice. While many think HIPAA is implicated in the collection of workers’ temperatures or responses to screening questions, this is often not the case unless a third-party provider or lab (i.e., a covered entity) is performing the screening, in which case an authorization is needed to share the results with the employer.
  • Will workers participate? Determining whether technology implementation may require notice or consent is discussed above. However, if implementation and/or usage is voluntary, the effectiveness of the technology in meeting the organization’s goals may be substantially impacted. Regardless of whether implementation is voluntary or required, it is important for organizations to communicate with their workers to explain the goals of the technology, answer questions, and address concerns over privacy and related issues in order to ensure buy-in and effectiveness.
  • How is data collected, shared, secured, returned? Understanding the answers to these questions is imperative to help ensure compliance, especially because numerous laws may be implicated when data is collected from workers. These include the Americans with Disabilities Act (ADA), the Genetic Information Nondiscrimination Act (GINA), state laws, the CCPA, and the General Data Protection Regulation (GDPR). In addition to statutory or regulatory mandates, organizations will also need to consider existing contracts or services agreements which may provide for or limit the collection, sharing, storage, or return of data. Finally, whether mandated by law or contract, organizations should still consider best practices to help ensure the privacy and security of the data they are responsible for.
  • Are the employees implementing the technology capable and trained? Should “managers” be viewing dashboards that provide extensive information about many of the organization’s workers? In these uncertain times, an organization may be left with no choice other than to expand the list of individuals who have access to workers’ personal information. However, when doing so, organizations still need to be mindful of the ADA’s confidentiality requirements, discrimination risks, and state laws protecting against discrimination based on lawful off-duty conduct (which may be discovered during the monitoring process). Addressing privacy and security obligations through a confidentiality agreement may be one way to help address these concerns.
  • What is the relationship with the vendor? The organization’s relationship with the vendor is established by way of a contract or service agreement. It is important for these contracts/agreements to include confidentiality, data security, and similar provisions. This is most important if the vendor will be maintaining, storing, accessing, or utilizing the information collected about the organization’s workers.
  • When should we stop using the technology? The Equal Employment Opportunity Commission (EEOC) has said that COVID-19 currently meets the ADA’s direct threat standard, and thus organizations may screen, take the temperatures of, and test workers prior to permitting those workers onsite. The EEOC has not yet expressly addressed contact tracing. As organizations look to the future, and the hoped-for end of the COVID-19 pandemic, they will need to consider when the state of the pandemic no longer supports the use of these technologies. The EEOC may provide that guidance; however, organizations may still have reasons to continue utilizing some of these technologies. For example, contact tracing may continue to help slow or limit spread within an organization. Similarly, organizations may face contractual demands from customers or clients who are looking to limit future risks or outbreaks related to COVID-19. At points during this process, organizations also will need to consider whether and how long to retain the data collected.

In short, in 2020 we have extensive technology at our disposal and/or in development which may play a crucial role in helping organizations address COVID-19, ensuring a safe and healthy workplace and workforce, and preventing future pandemics. Nevertheless, organizations must consider the legal risks, challenges, and requirements of any such technology prior to implementation.

*As noted, the Table is for general information purposes only. We have sampled none of these products or services. Neither the selection of these products and services nor the exclusion of others is in any way intended as an endorsement of, or opposition to, any type of product, service, application, or manufacturer. The listing is intended solely to provide readers with a general, high-level overview of the kinds of products being developed to address certain aspects of COVID-19 remediation. This is by no means an exhaustive list. All readers must carefully evaluate their own specific needs for COVID-19 mitigation and compliance, review the specific features and specifications of any technology being considered, configure and install the same with qualified information systems specialists, and obtain experienced and informed legal counsel concerning the applicable legal and compliance requirements for the selection and implementation of any technology solution.

Over the past few months, businesses across the country have been focused on the California Consumer Privacy Act (CCPA), which dramatically expands privacy rights for California residents and provides a strong incentive for businesses to implement reasonable safeguards to protect personal information. That focus is turning back east as the Stop Hacks and Improve Electronic Data Security Act (SHIELD Act) becomes effective in less than two weeks. With the goal of strengthening protections for New York residents against data breaches affecting their private information, the SHIELD Act imposes more expansive data security requirements and updates the state’s existing data breach notification requirements.

This post highlights some features of the SHIELD Act. Given the complexities involved, organizations would be well-served to address their particular situations with experienced counsel.

When does the SHIELD Act become effective?

The SHIELD Act has two effective dates:

  • October 23, 2019 – Changes to the existing breach notification rules
  • March 21, 2020 – Data security requirements

Which businesses are covered by the SHIELD Act?

The SHIELD Act’s obligations apply to “[a]ny person or business which owns or licenses computerized data which includes private information” of a resident of New York. Previously, the obligation to provide notification of a data breach under New York’s breach notification law applied only to persons or businesses that conducted business in New York.

Are there any exceptions for small businesses?

As before the SHIELD Act, there are no exceptions for small businesses in the breach notification rule. A small business that experiences a data breach affecting the private information of New York residents must notify the affected persons. The same is true for persons or businesses that maintain (but do not own) computerized data that includes private information of New York residents. Persons or businesses that experience a breach affecting that information must notify the information’s owner or licensee.

However, the SHIELD Act’s data security obligations include some relief for small businesses, defined as any person or business with: