Yesterday, Baltimore’s local ordinance prohibiting persons from “obtaining, retaining, accessing, or using certain face surveillance technology or any information obtained from certain face surveillance technology” became effective. The new ordinance prohibits the use of facial recognition technology by city residents, businesses, and most of the city government (excluding the city police department) until December 2022. Baltimore joins a growing list of localities regulating private use of facial recognition technology, including Portland (Oregon) and New York City.

Specifically, the Baltimore ordinance prohibits an individual or entity from obtaining, retaining, or using a facial surveillance system, or any information obtained from a facial surveillance system, within the boundaries of Baltimore City. “Facial surveillance system” is defined as any computer software or application that performs face surveillance. Notably, the Baltimore ordinance explicitly excludes from the definition of “facial surveillance system” a biometric security system designed specifically to protect against unauthorized access to a particular location or an electronic device, meaning organizations’ use of a biometric security system for employee/visitor access to their facilities would appear to remain permissible under the bill. The ordinance also excludes from its definition of “facial surveillance system” the Maryland Image Repository System (MIRS) used by the Baltimore City Police in criminal investigations.

Significantly, a person in violation of the law is subject to a fine of not more than $1,000, imprisonment of not more than 12 months, or both. Each day that a violation continues is considered a separate offense. This criminalization of facial recognition use is the first of its kind in the United States.

Businesses in the City of Baltimore should be evaluating whether they are using facial recognition technologies, whether they fall into one of the exceptions in the ordinance, and, if not, what alternatives they have for verification, security, and the other purposes for which the technology was implemented. An earlier post providing details and analysis of the Baltimore prohibition on face surveillance technology is available here.


Facial recognition technology has become increasingly popular in recent years in the employment and consumer space (e.g., employee access, passport check-in systems, payments on smartphones), and in particular during the COVID-19 pandemic. As the need arose to screen persons entering a facility for symptoms of the virus, including temperature, thermal cameras, kiosks, and other devices embedded with facial recognition capabilities were put into use. However, many have objected to the use of this technology in its current form, citing problems with its accuracy, and now, more alarmingly, there is growing concern that “Faces are the Next Target for Fraudsters,” as summarized in a recent article in the Wall Street Journal (“WSJ”).

In the last year, there has been an uptick in hackers trying to “trick” facial recognition technology in a myriad of settings, such as fraudulently claiming unemployment benefits from state workforce agencies. The majority of states now use facial recognition technology to verify eligible citizens, ironically enough, in order to prevent other types of fraud. As discussed in the WSJ article, ID.me Inc., a firm that provides facial recognition software to 26 states to help verify individuals eligible for unemployment benefits, saw more than 80,000 attempts to fool government identification facial recognition systems between June 2020 and January 2021. Hackers of facial recognition systems use a myriad of techniques, including deepfakes (AI-generated images), special masks, or even holding up images or videos of the individual the hacker is looking to impersonate.

Fraud is not the only concern with facial recognition technology. Despite its appeal for employers and organizations, there are concerns regarding the accuracy of the technology, as well as significant legal implications to consider. First, there are growing concerns regarding the accuracy and biases of the technology. A recent report by the National Institute of Standards and Technology studied 189 facial recognition algorithms, considered the “majority of the industry.” The report found that most of the algorithms exhibit bias, falsely identifying Asian and Black faces 10 to more than 100 times as often as White faces. Moreover, false positives are significantly more common in women than men, and more elevated in the elderly and children than in middle-aged adults.

In addition, several U.S. localities have already banned the use of facial recognition by law enforcement, other government agencies, and/or private and commercial users. The City of Baltimore, for example, recently banned the use of facial recognition technologies by city residents, businesses, and most of the city government (excluding the city police department) until December 2022. Council Bill 21-0001 prohibits persons from “obtaining, retaining, accessing, or using certain face surveillance technology or any information obtained from certain face surveillance technology.” Likewise, in September 2020, the City of Portland, Oregon, became the first city in the United States to ban the use of facial recognition technologies in the private sector, citing, among other things, a lack of standards for the technology and wide ranges in accuracy and error rates that differ by race and gender. Failure to comply can be painful: the Portland ordinance provides persons injured by a material violation a cause of action for damages or $1,000 per day for each day of violation, whichever is greater.

And finally, companies looking to implement facial recognition technologies must consider their obligations under laws such as Illinois’ Biometric Information Privacy Act (BIPA) and the California Consumer Privacy Act (CCPA). The BIPA addresses a business’s collection of biometric data from both customers and employees, including, for example, facial recognition, fingerprints, and voice prints. The BIPA requires informed consent prior to collection of biometric data, mandates protection obligations and retention guidelines, and creates a private right of action for individuals aggrieved by BIPA violations, which has resulted in a flood of BIPA class action litigation in recent years. Texas, Washington, and California also have similar requirements; New York is considering a BIPA-like privacy bill; and New York City recently created BIPA-like requirements for retail and hospitality businesses concerning biometric collection from customers. Additionally, states are increasingly amending their breach notification laws to add biometric information to the categories of personal information that require notification, including 2020 amendments in California, D.C., and Vermont. Moreover, there are a myriad of data destruction, reasonable safeguards, and vendor requirements to consider, depending on the state, when collecting biometric data.

Takeaway

Facial recognition and other biometric data-related technology is booming and continues to infiltrate facets of life that are hard even to contemplate. The technology brings innumerable potential benefits as well as significant data privacy and cybersecurity risks. Organizations that collect, use, and store biometric data increasingly face compliance obligations as the law attempts to keep pace with technology, cybersecurity crimes, and public awareness of data privacy and security. Creating a robust privacy and data protection program, or regularly reviewing an existing one, is a critical risk management and legal compliance step.

Colorado is officially the third U.S. state to enact comprehensive privacy legislation, following California and Virginia. The Colorado General Assembly passed the Colorado Privacy Act (CPA), Senate Bill 21-190, on June 8, 2021, and Governor Jared Polis signed it into law on July 7, 2021.

The Colorado Privacy Act takes effect July 1, 2023, six months after the Virginia Consumer Data Protection Act (VCDPA) and California Privacy Rights Act (CPRA).

Applicability

The CPA imposes new obligations on Controllers—that is, any entity that (i) determines the purposes and means of processing personal data, (ii) conducts business in Colorado or produces or delivers commercial products or services intentionally targeted to residents of the state, and (iii) either (a) controls or processes the personal data of more than 100,000 Colorado residents per year or (b) derives revenue from selling the personal data of more than 25,000 Colorado residents.

It also provides new rights to Consumers—or, any individual who is a Colorado resident acting in an individual or household context.

The CPA does not apply to data that is subject to other federal privacy laws such as the Health Insurance Portability and Accountability Act (HIPAA), the Children’s Online Privacy Protection Act (COPPA), the Gramm-Leach-Bliley Act (GLBA), the Family Educational Rights and Privacy Act (FERPA), and the Securities Exchange Act of 1934. The CPA also exempts employment data, higher education institutions, nonprofits, state and local governments, and public utility customer records (so long as they are not sold).

Consumer Rights under the Colorado Privacy Act

The rights the CPA affords to Consumers are similar to those in the VCDPA and CCPA/CPRA.

In broad strokes, the CPA regulates the use of and disclosures surrounding “personal data,” which includes information that is linked, or reasonably linkable, to an identifiable person, and “sensitive data,” which includes data revealing racial or ethnic origin, religious beliefs, a mental or physical health condition, sexual orientation, citizenship, genetic or biometric data, or personal data from a known child.

The CPA empowers Consumers with new controls over their data, including the right to:

  1. opt out of the processing of certain personal data;
  2. access personal data (up to twice per calendar year);
  3. correct inaccurate data;
  4. delete personal data; and
  5. obtain their personal data in a portable format.

Controller Duties under the Colorado Privacy Act

Similarly, the CPA creates duties for Controllers, including the:

  • Duty of transparency;
  • Duty of purpose specification;
  • Duty of data minimization;
  • Duty to avoid secondary use;
  • Duty to avoid unlawful discrimination; and
  • Duty regarding sensitive data.

In addition, while Consumers may request access to their personal data, Controllers may not require that a Consumer create a new account in order to exercise this right (or retaliate with increased cost or decreased availability of a product or service).  When responding to Consumer data requests, Controllers must:

  • Take action on the Consumer’s request without undue delay and within 45 days of receiving the request—with few exceptions.
  • Develop an internal process for Consumers to appeal refusals of data requests.
  • Notify the Consumer that they may contact the Colorado Attorney General if they have concerns about the result of the response and the outcome of any appeal.

Controllers must also conduct data protection assessments for each processing activity involving a heightened risk of harm to Consumers, including:

  • The sale of personal data;
  • Processing of sensitive data; or
  • Processing personal data for targeted advertising if it could lead to unfair or deceptive treatment of, or have a disparate impact on, Consumers; financial or physical injury; physical or other intrusion upon seclusion; or other substantial injury.

Controllers must present these data protection assessments to the Colorado Attorney General upon request.

Enforcement

One key difference between the CPA and the California and Virginia privacy laws is that the CPA is enforceable by both district attorneys and the office of the Attorney General. This broadened enforcement mechanism could lead to greater scrutiny of affected businesses.

Unlike the CCPA, the CPA does not include a private right of action. The attorney general or district attorney may, however, institute a civil action or pursue injunctive relief. Failure to comply with the CPA may be considered a deceptive trade practice. Financial penalties are left to the discretion of the courts.

Key Takeaways

Colorado may be only the third state to enact comprehensive privacy legislation, but other states will likely soon follow. The differences between the CPA, VCDPA, and CPRA are subtle, though there are plenty of technical details to sift through. While the laws’ similarities may ease the burden of compliance, companies still need to ensure their data collection activities fully comply with the provisions of each privacy act.

And with more states likely to follow suit, data privacy compliance will only get more complicated.

Please contact a Jackson Lewis attorney with any questions.

* Jackson Biesecker is a law clerk in our Privacy, Data & Cybersecurity Practice Group who contributed substantially to this article.


The Baltimore City Council recently passed an ordinance, in a vote of 13-2, barring the use of facial recognition technology by city residents, businesses, and most of the city government (excluding the city police department) until December 2022.  Council Bill 21-0001  prohibits persons from “obtaining, retaining, accessing, or using certain face surveillance technology or any information obtained from certain face surveillance technology.”

Facial recognition technology has become more popular in recent years, including during the COVID-19 pandemic. As the need arose to screen persons entering a facility for symptoms of the virus, including temperature, thermal cameras, kiosks, and other devices embedded with facial recognition capabilities were put into use, often inadvertently. However, many have objected to the use of this technology in its current form, citing problems with the accuracy of the technology, as summarized in a June 9, 2020 New York Times article, “A Case for Banning Facial Recognition.”

While many localities across the nation, such as San Francisco and Oakland, have barred the use of facial recognition systems by city police and other government agencies, Baltimore is only the second city (following Portland, Oregon) to ban biometric technology use by private residents and businesses. Effective January 1, 2021, the City of Portland banned the use of facial recognition by private entities in any “places of public accommodation” within the boundaries of the city. “Places of public accommodation” was broadly defined to include any “place or service offering to the public accommodations, advantages, facilities, or privileges whether in the nature of goods, services, lodgings, amusements, transportation or otherwise.”

Specifically, the Baltimore ordinance prohibits an individual or entity from obtaining, retaining, or using a facial surveillance system, or any information obtained from a facial surveillance system, within the boundaries of Baltimore City. “Facial surveillance system” is defined as any computer software or application that performs face surveillance. Notably, the Baltimore ordinance explicitly excludes from the definition of “facial surveillance system” a biometric security system designed specifically to protect against unauthorized access to a particular location or an electronic device, meaning employers’ use of a biometric security system for employee/visitor access to their facilities would appear to remain permissible under the bill. The ordinance also excludes from its definition of “facial surveillance system” the Maryland Image Repository System (MIRS) used by the Baltimore City Police in criminal investigations.

A person in violation of the law is subject to a fine of not more than $1,000, imprisonment of not more than 12 months, or both. Each day that a violation continues is considered a separate offense. This criminalization of facial recognition use is the first of its kind in the United States.

The Baltimore bill also includes a separate section applicable only to the Mayor and City Council of Baltimore City, requiring an annual surveillance report by the Director of Baltimore City Information and Technology (or any successor entity), in consultation with the Department of Finance, to be submitted to the Mayor of Baltimore detailing: 1) each purchase of surveillance technology during the prior fiscal year, disaggregated by the purchasing agency, and 2) an explanation of the use of the surveillance technology.  In addition, the report must be posted to the Baltimore City Information and Technology website. Examples of surveillance technology that must be included in the report include automatic license plate readers, x-ray vans, mobile DNA capture technology, and software designed to forecast criminal activity or criminality.

It is important to note that the bill’s provisions are set to automatically expire December 31, 2022, unless the City Council, after appropriate study, including public hearings and testimonial evidence, concludes that such prohibitions and requirements are in the public interest, in which case the law will be extended for an additional 5 years.

The Baltimore ordinance has been met with significant opposition by industry experts, particularly as the ordinance would be the first in the U.S. to criminalize private use of biometric technologies. In a joint letter, the Security Industry Association (SIA), the Consumer Technology Association (CTA), the Information Technology and Innovation Foundation (ITIF), and the XR Association urged the city to reject the enactment of the Baltimore ordinance on the grounds that it is overly broad and prohibits commercial applications of facial recognition technology that already have widespread public acceptance and provide “beneficial and noncontroversial” services, including, for example: increased and customized accessibility for disabled persons; healthcare facilities verifying patient identities while reducing the need for close-proximity interpersonal interactions; banks enhancing consumer security to verify purchases and ATM access; and many more. A similar concern was voiced by Councilmember Isaac Schleifer, who cast one of the two votes opposing the ordinance.

The ordinance now awaits signature by Baltimore Mayor Brandon Scott and, if signed, will become effective 30 days after enactment. In anticipation of the ordinance’s potential enactment, businesses in the City of Baltimore should begin evaluating whether they are using facial recognition technologies, whether they fall into one of the exceptions in the ordinance, and, if not, what alternatives they have for verification, security, and the other purposes for which the technology was implemented.

UPDATE: On June 16, Gov. Ned Lamont signed HB 5310 into law which becomes effective October 1, 2021.

State legislatures across the nation are prioritizing privacy and security matters, and Connecticut is no exception. This week, Connecticut Attorney General William Tong announced the passage of An Act Concerning Data Privacy Breaches, a measure that will enhance and strengthen Connecticut’s data breach notification law. The Connecticut House of Representatives unanimously approved the bill on May 27th, and the Senate followed with unanimous approval shortly after.  The bill now heads to Governor Ned Lamont for signature.

“Connecticut has led the nation in data privacy for over a decade, and this legislation ensures that we will continue to do so. Since we passed one of our nation’s first laws protecting consumers from online data breaches, technology and risks have evolved. This legislation ensures that our laws reflect those evolving risks and continue to offer strong, comprehensive protection for Connecticut residents,”

Attorney General Tong observed in his announcement of the data breach notification bill.

Key aspects of Connecticut’s enhanced data breach notification law include:

  • Expansion of the definition of “personal information.”

Originally, Connecticut defined “personal information” as an individual’s first name or first initial and last name in combination with any one or more of the following data elements:

    • Social security number
    • Driver’s license number
    • State identification card number
    • Credit or debit card number
    • Financial account number in combination with any required security code, access code, or password that would permit access to such financial account.

The new law, if enacted, will look more like similar laws in California and Florida by including additional data categories:

    • Individual taxpayer identification number
    • Identity protection personal identification number issued by the IRS
    • Passport number, military identification number or other identification number issued by the government that is used to verify identity
    • Medical information regarding an individual’s medical history, mental or physical condition or medical treatment or diagnosis by a healthcare professional
    • Health insurance policy number or subscriber identification number, or any unique identifier used by a health insurer to identify the individual
    • Biometric information consisting of data generated by electronic measurements of an individual’s unique physical characteristics and used to authenticate or ascertain the individual’s identity, such as a fingerprint, voice print, retina or iris image; and
    • User name or electronic mail address, in combination with a password or security question and answer that would permit access to an online account.
  • Notification Time and Content.

The new law would shorten the time a business has to notify affected Connecticut residents and the Office of the Attorney General of a data breach from 90 days to 60 days. Remember, as with most other breach notification mandates, the timing requirement is “without unreasonable delay but not later than 60 days” in this case. In addition, if identification of the residents of the state whose personal information was breached or reasonably believed to have been breached will not be completed within 60 days, the business must provide preliminary substitute notice as outlined by the law and proceed in good faith to identify affected residents and provide direct notice as expediently as possible. Incident response plans would need to be reviewed to ensure this requirement is incorporated.

  • Breach of Login Credentials.

The new law would add a section addressing unique notification requirements in the case of a breach of login credentials. In such a case, notice to an affected resident may be provided in electronic or other form that directs the resident to promptly change any password or security questions and answers, or to take other appropriate steps to protect the affected online account, or any account with the same login credentials.

  • HIPAA and HITECH Act Exception.

Any person subject to and in compliance with HIPAA and/or the HITECH Act privacy and security obligations is deemed in compliance with the new law, with a couple of critical exceptions. First, as under New York’s SHIELD Act, a person subject to HIPAA or HITECH that is required to notify Connecticut residents of a data breach under HITECH must still notify Connecticut’s Attorney General at the same time residents are notified. Second, if the person would have been required to provide identity theft prevention and/or mitigation services under Connecticut law, which is for a period of 24 months, that requirement remains.

  • Investigation Materials.

Under the new law, documents, material and information connected to the investigation of a breach of security would be exempt from public disclosure, unless required to be made available to third parties by the Attorney General in furtherance of the investigation.

This new law, if signed, keeps Connecticut in line with other states across the nation currently enhancing their data breach notification laws in light of recent large-scale data breaches and heightened public awareness.  Organizations across the United States should be evaluating and enhancing their data breach prevention and response capabilities.

Below are several resources for understanding current trends in the state data breach notification law landscape:

On May 13th, New York State Senator Kevin Thomas, Chair of NY’s Consumer Protection Committee, reintroduced the New York Privacy Act (“NYPA”), a comprehensive consumer privacy law similar in kind to the California Consumer Privacy Act (“CCPA”), California Privacy Rights Act (“CPRA”), and Virginia’s Consumer Data Protection Act (“CDPA”).  The NYPA had been introduced in a previous legislative session back in 2019, but failed to move forward in the legislative process.

This version of the NYPA is in some respects less ambitious than the prior version.  For example, the latest version removed the bill’s broad application to any “legal entities that conduct business in New York” or that produce products or services that “intentionally target” New York residents, which would have meant that small- to medium-sized businesses, and potentially even not-for-profits, would have been subject to the law. Nevertheless, the NYPA surpasses the CCPA and CDPA in some important respects, including by requiring data controllers to:

  • collect opt-in consent from consumers before processing their personal data for any purpose;
  • provide detailed disclosures about the activities of outside parties to whom they disclose personal data;
  • respond to consumer requests to correct personal data; and
  • make disclosures about their automated decision-making activities, afford consumers the opportunity to challenge automated decisions, and conduct and publish assessments on the impacts of their automated decision-making processes.

The NYPA would also impose on data controllers duties of loyalty and care – the latter of which would require an annual risk assessment of all of the data controller’s data processing activities – and take direct aim at targeted advertising and data sales, declaring that these activities “shall not be considered processing purposes that are necessary to provide services or goods requested by a consumer.”

“Consumers should have a right to choose if and how their personal information is collected and used by companies,” said Senator Thomas in his reintroduction of the NYPA. “And New Yorkers deserve to know that businesses who are collecting, processing and protecting their personally identifiable information are doing so ethically and responsibly. The New York Privacy Act will set new, groundbreaking standards for comprehensive privacy legislation by advancing consumer privacy rights and creating stronger industry standards that empower businesses to enhance consumer confidence by putting privacy and security front-and-center.”

Below is a rundown of the NYPA’s key components:

  • Application: The NYPA would apply to legal persons that conduct business in New York State or produce products or services intentionally targeted to residents in New York State and that satisfy at least one of the following thresholds:
    • have annual gross revenue of $25M or more;
    • control or process personal data of at least 100,000 New York residents;
    • control or process personal data of at least 500,000 persons nationwide, at least 10,000 of whom are New York residents; or
    • derive over 50% of their gross revenue from the sale of personal data and control or process personal data of at least 25,000 New York residents.
  • Exemptions: Exempted from the NYPA are state and local governments, as well as personal data that is regulated by HIPAA, HITECH, FERPA, DPPA, and GLBA and, notably, “data sets maintained for employment records purposes, for purposes other than sale.”
  • Personal Data: Similar to the CCPA and CDPA, the NYPA defines personal data broadly to include “any data that is identified or could reasonably be linked, directly or indirectly, with a specific natural person, household, or device”. That said, unlike the CPRA,  CDPA or GDPR, the New York bill does not include a category for “sensitive data” to which heightened protections apply.
  • Consumer: The NYPA defines “consumer” as “a natural person who is a resident of New York acting only in an individual or household context.” The NYPA states that the definition of consumer does not include a “natural person acting in a commercial or employment context.”
  • Consumer Rights: The NYPA provides consumers a broad set of rights over their personal data, including the rights to:
    • receive clear notice of how their data is being used, processed and shared;
    • provide or withhold consent for the processing of their data for any purpose;
    • access and obtain a copy of their data in a commonly used electronic format, with the ability to transfer it between services;
    • correct inaccuracies in their data;
    • delete their data; and
    • challenge certain automated decisions.
  • Notice to Consumers: Under the NYPA, data controllers must provide written notice to consumers when processing their personal data in “easy-to-understand language at an eighth-grade reading level or below.” This notice must include a description of the consumers’ rights, the categories of personal data processed, the sources of that data, the purposes for which the data is processed, and the identities of all outside parties to whom the data is disclosed, as well as information about how those parties will use the data and how long they will retain it. The notice must be dated with its effective date and updated at least annually. The notice (as well as each version of the notice dating back six years) must be made readily available to consumers.
  • Non-Discrimination: The NYPA prohibits discrimination against a consumer who exercises their rights under the law. For example, a business may not target the consumer by denying goods or services or charging a higher price.
  • Data Broker Registry: The NYPA requires data brokers to register, pay an annual fee to the Attorney General, and submit information regarding their data use practices and contact information. The Attorney General must maintain a data broker registry on its website. Additionally, controllers must annually submit a list of all known data brokers or persons reasonably believed to be data brokers with whom the controller provided personal data in the preceding year and can only share personal data with data brokers that are properly registered.
  • Data Security: At least annually, under the NYPA, data controllers are required to conduct and document risk assessments of all current processing of personal data. In addition, data controllers must develop, implement, and maintain reasonable safeguards to protect the security, confidentiality and integrity of the personal data of consumers including adopting reasonable administrative, technical and physical safeguards appropriate to the volume and nature of the personal data at issue. The NYPA also imposes requirements related to data retention, data disposal and vendor management.
  • Enforcement and Private Right of Action: The NYPA authorizes the Attorney General to bring an action or special proceeding whenever it appears that a person has engaged or is about to engage in a violation of the law, with civil penalties of not more than $15,000 per violation (each instance of unlawful processing counts as a separate violation). And unlike comparable state laws, the NYPA would grant consumers a private right of action to enjoin violations of their rights under the law and to seek the greater of actual damages or liquidated damages in the amount of $1,000, along with attorney’s fees.  Contrary to other state consumer privacy bills introduced of late, such as Florida’s recently failed HB 969 or New York’s Biometric Privacy law, an organization found to have violated the NYPA does not have the opportunity to cure the violation before facing enforcement actions or litigation.

States across the country are contemplating ways to enhance their data privacy and security protections, with New York playing a leading role.  In addition to the reintroduction of the NYPA, other consumer privacy bills are under consideration by the New York state legislature, and the New York City Council recently passed a data privacy bill that would impose rigorous requirements on owners of “Smart Access” buildings and has also created biometric information collection requirements for retail and hospitality businesses similar in kind to those of Illinois’s infamous Biometric Information Privacy Act (“BIPA”). Organizations, regardless of their location, should be assessing and reviewing their data collection activities, building robust data protection programs, and investing in written information security programs.


As we noted in our last post, there has been a flurry of data privacy and security activity in New York, with the State appearing poised to join California as a leader in this space.  Most recently, on April 29, 2021, the New York City Council passed the Tenant Data Privacy Act (“TDPA”), which would impose on owners of “smart access” buildings obligations related to their collection, use, safeguarding, and retention of tenant data.

Under the TDPA, a “smart access” building is one that uses electronic or computerized technology (e.g., a key fob), radio frequency identification cards, mobile phone applications, biometric information (e.g., fingerprints, voiceprints, hand or face geometry), or other digital technology to grant entry to the building, or to common areas or individual dwelling units therein.  The TDPA would require owners of smart access buildings to develop and maintain policies and procedures to address the following requirements:

  1. Express Consent. Before collecting “reference data” from a tenant for use in connection with the building’s smart access system, the building owner would be required to obtain the tenant’s express consent “in writing or through a mobile application.”  “Reference data” is the data used by the system to verify that the individual seeking access is authorized to enter.  Even after obtaining consent, the owner would only be permitted to collect the minimum amount of data necessary to enable the smart access system to function effectively.
  2. Privacy Policy. Building owners would also need to provide a “plain language” privacy policy to their tenants that includes certain disclosures, including the data elements that the system collects, the third parties with whom data is shared, how the data is protected, and how long it will be retained.
  3. Stringent Security Safeguards. Additionally, the TDPA would require building owners to implement robust security measures and safeguards to protect the data of their tenants, guests, and other users of the smart access system.  At a minimum, these security measures would need to include data encryption, a password reset capability (if the system uses a password), and regularly updated firmware to address security vulnerabilities.
  4. Data Destruction. With limited exceptions, building owners would need to destroy any “authentication data” collected through their smart access systems no later than 90 days after collection.  “Authentication data” is the data collected from the user at the point of authentication, excluding any data generated through or collected by a video or camera system used to monitor entrances, but not to grant entry.
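The 90-day destruction window is simple date arithmetic, but building owners will need to operationalize it. The sketch below is purely illustrative (the function name and field are our own, not anything from the TDPA's text): given the date authentication data was collected, it computes the latest permissible destruction date.

```python
from datetime import date, timedelta

# TDPA: authentication data must be destroyed no later than 90 days
# after collection (subject to the statute's limited exceptions).
DESTRUCTION_WINDOW = timedelta(days=90)

def destruction_deadline(collected_on: date) -> date:
    """Latest date by which authentication data collected on
    `collected_on` must be destroyed under the 90-day rule."""
    return collected_on + DESTRUCTION_WINDOW

# Example: data collected July 1, 2021 must be destroyed by September 29, 2021.
```

A compliance program would typically run a check like this on a schedule and purge any records whose deadline has passed.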

The TDPA would impose strict limits on the categories of tenant data that building owners would be permitted to collect, generate, or utilize through their smart access systems.  Specifically, they would only be permitted to collect:

  • the user’s name;
  • the dwelling unit number and that of other doors or common areas to which the user has access;
  • the user’s preferred method of contact;
  • the user’s biometric identifier information (if the smart access system utilizes such information);
  • the identification card number or any identifier associated with the physical hardware used to facilitate building entry (e.g., Bluetooth);
  • passwords, passcodes, usernames and contact information used singly or in conjunction with other reference data to grant the user access;
  • lease information, including move-in and, if available, move-out dates; and
  • the time and method of access (but solely for security purposes).

Building owners would also be prohibited, subject to certain exceptions, from selling, leasing, or otherwise disclosing tenant data to any third parties.  Building owners that wish to engage third-party vendors to operate or facilitate use of their smart access systems would be required to first (a) provide to users the name of the vendor, the intended use of user data by the vendor, and a copy of the vendor’s privacy policy, and (b) obtain the users’ express written authorization to disclose the users’ data to the vendor.

Significantly, the TDPA would also create a private right of action for tenants whose data is unlawfully sold.  Such tenants would be empowered to seek either compensatory damages or statutory damages ranging from $200 to $1,000 per tenant, along with attorneys’ fees.

Unless vetoed by the City’s Mayor, the TDPA will take effect at the end of June 2021, though building owners will be granted a grace period until January 1, 2023, to develop their compliance programs and replace or upgrade their smart access systems.  Building owners should use that time wisely, as the TDPA’s requirements will, in many instances, be a heavy lift.

The California Privacy Rights Act (CPRA) amended the California Consumer Privacy Act (CCPA) and has an operative date of January 1, 2023. The CPRA introduces new compliance obligations, including a requirement that businesses conduct risk assessments. While many U.S. companies currently conduct risk assessments for compliance with state “reasonable safeguards” statutes (e.g., Florida, Texas, Illinois, Massachusetts, New York) or the HIPAA Security Rule, the CPRA risk assessment has a different focus. The requirement is similar to the data protection impact assessment (DPIA) under the EU General Data Protection Regulation (GDPR).

The goal of conducting a CPRA risk assessment is to restrict or prohibit the processing of personal information where the risks to a consumer’s privacy outweigh any benefits to the consumer, business, stakeholders, and public. Notably, the CPRA does not limit risk assessments to activities involving the processing of sensitive data. In addition to conducting the actual risk assessment, this process will require a preliminary determination of which data processing activities may present a significant risk to privacy rights. The business must document these risk assessments for submission to the California Privacy Protection Agency on a regular basis.

Under the CPRA, the documented risk assessment shall:

  • include whether the processing involves consumers’ sensitive personal information (e.g., social security, driver’s license, state identification card, or passport number; account log-in, financial account, debit card, or credit card number in combination with security or access code, password, or credentials for account; precise geolocation; racial or ethnic origin, religious or philosophical beliefs, or union membership; contents of mail, email, and text messages unless the business is the intended recipient of the communication; genetic data; biometric information processed for the purpose of uniquely identifying a consumer; information related to health, sex life or orientation); and
  • identify and weigh the benefits to the business, consumer, other stakeholders, and the public from the processing against the potential risks to the rights of the consumer whose data is being processed.

The CPRA directs the California Attorney General and California Privacy Protection Agency to issue implementing regulations, including regulations related to risk assessments. These regulations must be adopted by July 1, 2022 and will likely provide further guidance on the scope of and process for conducting and documenting risk assessments.

Complying with the CPRA will require expanded data mapping and advance planning, some of which may occur prior to issuance of the implementing regulations. During this time, businesses may find the GDPR instructive, particularly since the CCPA and CPRA borrow liberally from the regulation.

Under the GDPR and related guidelines, a DPIA is required or recommended where data processing is likely to result in a high risk to the privacy rights of individuals. This includes activities that

  • use automated processing, including profiling, to evaluate an individual’s personal aspects as the basis for decisions that produce significant effects
  • include large scale processing of sensitive data
  • process data on a large scale
  • match or combine datasets
  • process data of vulnerable individuals (e.g., children)
  • innovate or use new technologies

The DPIA must document and include

  • a description of the processing operations
  • the purposes of the processing
  • the legitimate interest pursued by the business, where applicable
  • an assessment of the necessity and proportionality of the processing activity in relation to the purposes
  • an assessment of the risks to the individual’s privacy rights
  • measures designed to address the risks, including safeguards, security measures and mechanisms to ensure the protection of personal data

The CCPA and CPRA currently exclude employee personal information from certain provisions (e.g., the right to opt out, right to delete). This carve-out exempts employee personal information from the risk assessment requirement outlined above; however, the carve-out is due to expire on January 1, 2023. As businesses begin developing their risk assessment programs, they will want to monitor whether this exclusion for employee information will be extended and/or amended and how it might impact the risk assessment process.

As noted above, the operative date of the CPRA is January 1, 2023. Implementing regulations must be adopted by July 1, 2022 and civil and administrative enforcement activity can commence on July 1, 2023.

For additional information on the CPRA, please reach out to a member of our Privacy, Data and Cybersecurity practice group or check out our CPRA blog series.

Colorado recently became the latest state to consider a comprehensive consumer privacy law.  On March 19, 2021, Colorado State Senators Rodriguez and Lundeen introduced SB 21-190, entitled “an Act Concerning additional protection of data relating to personal privacy”. Following California’s bold example of the California Consumer Privacy Act (“CCPA”) effective since January 2020, Virginia recently passed its own robust privacy law, the Consumer Data Protection Act (“CDPA”), and New York, as well as other states, like Florida, appear poised to follow suit.  Furthermore, California is expanding protections provided by the CCPA, with the California Privacy Rights Act (CPRA) – approved by California voters under Proposition 24 in the November election.

Unsurprisingly, Colorado’s SB 21-190 generally tracks the CCPA, CDPA, CPRA, and the EU General Data Protection Regulation (GDPR).  Key elements of the Colorado bill include:

  • Jurisdictional Scope. SB 21-190 would apply to legal entities that conduct business or produce products or services that are intentionally targeted to Colorado residents and that either:
    • Control or process personal data of more than 100,000 consumers per calendar year; or
    • Derive revenue from the sale of personal data and control or process the personal data of at least 25,000 consumers.
  • Exemptions. SB 21-190 includes various exemptions related to healthcare entities and health data, such as protected health information under HIPAA, patient identifying information maintained by certain substance abuse treatment facilities, and identifiable private information collected in connection with human subject research. Additional exemptions include, without limitation, personal data collected for purposes of the Gramm-Leach-Bliley Act (GLBA), the Driver’s Privacy Protection Act (DPPA), the Children’s Online Privacy Protection Act (COPPA), and the Family Educational Rights and Privacy Act (FERPA). Finally, data maintained for employment records purposes is exempted as well.
  • Personal Data. Similar to its counterparts, Colorado’s SB 21-190 broadly defines personal data to mean “information that is linked or reasonably linkable to an identified or identifiable individual.”
  • Sensitive Data. Like the CDPA, CPRA, and GDPR, SB 21-190 includes a category for “sensitive data”. This is defined as “personal data revealing racial or ethnic origin, religious beliefs, a mental or physical health condition or diagnosis, sex life or sexual orientation, or citizenship or citizenship status OR genetic or biometric data that may be processed for the purpose of uniquely identifying an individual OR personal data from a known child”. As with Virginia’s CDPA, there are two key compliance obligations related to “sensitive data”.  First, sensitive data cannot be processed without obtaining consumer consent or, in the case of a known child or student, without obtaining consent from a parent or lawful guardian.  Second, the controller must conduct and document a data protection assessment specifically for the processing of sensitive data.
  • Protected Persons. SB 21-190 defines “consumer” as an “individual who is a Colorado resident acting only in an individual or household context”. The Colorado bill states that the definition of consumer does not include “an individual acting in a commercial or employment context”.
  • Consumer Rights. Under SB 21-190, consumers have the right to opt out of the processing of their personal data; access, correct, or delete the data; or obtain a portable copy of the data.
  • Data Protection Assessments. Akin to Virginia’s CDPA, the Colorado bill requires data controllers to conduct a data protection assessment for each of their processing activities involving personal data that presents a heightened risk of harm to consumers, such as processing for purposes of targeted advertising or processing sensitive data (as mentioned above).
  • Enforcement. If enacted, SB 21-190 would be enforceable only by the Colorado attorney general or district attorneys. A violation of the law could result in a civil penalty of not more than $2,000 for each violation (not to exceed $500,000 for any related series of violations), or an injunction.
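Because the per-violation penalty is capped for a related series, maximum exposure grows linearly only up to a point. The short sketch below (an illustration of the bill's arithmetic, not legal advice; the function name and scenario counts are hypothetical) shows how the $2,000-per-violation figure interacts with the $500,000 series cap:

```python
# Figures from SB 21-190 as introduced; everything else here is illustrative.
PER_VIOLATION_MAX = 2_000      # maximum civil penalty per violation
RELATED_SERIES_CAP = 500_000   # cap for any related series of violations

def max_penalty_exposure(violation_count: int) -> int:
    """Upper bound on civil penalties for a related series of violations."""
    return min(violation_count * PER_VIOLATION_MAX, RELATED_SERIES_CAP)

# 100 related violations stay under the cap ($200,000);
# 1,000 related violations would be limited to the $500,000 cap.
```

In practice the cap means that once a related series exceeds 250 violations, additional violations in that series do not increase the maximum civil penalty.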

Colorado’s SB 21-190 is in the early stages of the legislative process, but it signals the continued momentum building in states across the country to enhance consumer data privacy and security protections. Organizations, regardless of their location, should be carefully assessing their data collection activities, developing policies and procedures to address their evolving compliance obligations and data-related risks, and training their workforce on effective implementation of those policies and procedures.