Efforts to secure systems and data from a cyberattack often focus on measures such as multifactor authentication (MFA), endpoint monitoring solutions, antivirus protections, and role-based access management controls, and for good reason. But there is a basic principle of data protection that, when applied across an organization, can significantly reduce the impact of a data incident – the minimum necessary principle. A data breach reported late last year by the Rhode Island Public Transit Authority (RIPTA) highlights the importance of this relatively simple but effective tool.

In December 2021, RIPTA sent notification of a data breach to several thousand individuals who were not RIPTA employees. Reports of the incident prompted inquiries from a state Senator in Rhode Island, Louis P. DiPalma, and union officials who represented the affected individuals. According to Rhode Island’s Department of Administration (DOA), a forensic analysis conducted in connection with the incident indicates the affected files included health plan billing records pertaining to State of Rhode Island employees, not RIPTA employees. The DOA goes on to state that:

[s]tate employee data was incorrectly shared with RIPTA by an external third party who had responsibility for administering the state’s health plan billing.

An investigation is underway to confirm exactly what happened. Recent conversations between state officials and union representatives, as reported in the press, indicate that a RIPTA payroll clerk received a file containing state employee health plan data in August 2020 and stored it on the employee’s hard drive, where it remained until the cyberattack on RIPTA in August 2021. It is unclear why the employee received the information, from whom, or whether it was appropriate to maintain it.

Regardless, the “minimum necessary” principle, simply stated, requires that organizations take reasonable steps to ensure confidential and personal information is accessed, used, maintained, or disclosed only as needed to carry out the applicable business functions. Consider, for example, that retention policies are becoming increasingly important from a compliance perspective, such as under the California Privacy Rights Act of 2020 (CPRA), which amends and supplements the California Consumer Privacy Act (CCPA), the EU General Data Protection Regulation (GDPR), and the Illinois Biometric Information Privacy Act (BIPA). This principle can be applied at multiple points in the operations of the organization, including without limitation:

  • When requesting information. Think about what elements of information the organization collects from customers, students, patients, vendors, employees, and others. Is it more information than is needed to carry out the purpose(s) for the collection? Can portals, forms, etc. be modified to limit the information collected?
  • When receiving information. Employees cannot always control the information they receive from parties outside the organization. But when they do, what steps or guidelines are in place to determine what is needed and what is not needed? For information that is not needed, what is the process for alerting the sender, if necessary, returning the data, and/or removing it from the systems?
  • When using information. Employees carry out many critical business functions that require the use of confidential and personal information. Do they always need all of it? Are there instances where less information would be sufficient to carry out an important business function?
  • When storing information. The task at hand has been completed and the question becomes what information should be retained. The answer can be a complex web of legally mandated retention requirements, contractual obligations, business needs, and other considerations. But organizations should carefully analyze these issues and establish protocols for employees to follow. Note that under the CPRA, a covered business may not retain a consumer’s personal information for longer than is reasonably necessary for the stated purpose for which it was collected.
  • When responding to requests or disclosing information. Whether engaging in billing and collection activities, responding to an attorney demand letter, reporting information to the government, administering benefit plans for employees, or any number of other typical business functions, organizations make disclosures of confidential and personal information. Important questions to ask are (i) what data does the requesting party really need; (ii) what classifications of information are actually in the file being disclosed, and are there limitations on the disclosure of that information; and (iii) can the response or disclosure achieve the same effect with less data?
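For organizations that collect data through portals or intake forms, the point-of-collection step above can be approximated in software with a per-purpose allowlist of fields. The sketch below is purely illustrative – the purposes, field names, and handling logic are hypothetical assumptions, not a compliance control:

```python
# Hypothetical sketch of the minimum necessary principle at the point of
# collection: validate each submission against a per-purpose allowlist so a
# form or intake portal never stores more than the stated purpose requires.
ALLOWED_FIELDS = {
    "payroll": {"name", "employee_id", "hours_worked", "pay_rate"},
    "benefits_enrollment": {"name", "employee_id", "plan_selection"},
}

def minimize(purpose: str, submitted: dict) -> dict:
    """Keep only the fields needed for the stated purpose; drop the rest."""
    allowed = ALLOWED_FIELDS.get(purpose, set())
    kept = {k: v for k, v in submitted.items() if k in allowed}
    dropped = sorted(set(submitted) - allowed)
    if dropped:
        # In practice, this is where the organization's protocol would alert
        # the sender and/or log the over-collection for review.
        print(f"Discarding unneeded fields for {purpose!r}: {dropped}")
    return kept

record = minimize("payroll", {
    "name": "Jane Doe",
    "employee_id": "E-1001",
    "hours_worked": 38.5,
    "ssn": "000-00-0000",  # not needed for this purpose, so never stored
})
```

The same pattern maps onto the receiving and disclosing steps: the allowlist forces someone to decide, in advance, what each business function actually needs.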

In thinking about these questions, there may not be a clear right or wrong answer as to whether the information should or should not have been collected, used, stored, or disclosed. However, from a risk management perspective, it is helpful to review business procedures, practices, operations, forms, etc. for ways to minimize exposure to confidential and personal information. Applying the minimum necessary principle can be an effective way of minimizing the organization’s data footprint so that, should it experience a security incident, less data stands to be compromised.

The use of smart dashcams and vehicle cameras, including those leveraging AI technology, may trigger the next wave of BIPA litigation, according to two cases filed in Illinois this week.

Enacted in 2008, the Illinois Biometric Information Privacy Act, 740 ILCS 14 et seq. (the “BIPA”), went largely unnoticed until a few years ago when a handful of cases sparked a flood of putative class action litigation over the collection, use, storage, and disclosure of biometric information. Many of these cases were filed by plaintiffs who alleged BIPA violations when time management devices called for them to swipe their finger to clock in or out of work. Use of those devices, many plaintiffs claim, resulted in the collection of their fingerprints without the corresponding notice, consent, and other measures required under the BIPA. The focus may be shifting to a new technology: AI-powered dashcams.

Organizations whose employees drive regularly to perform job functions face several issues – safety, productivity, loss prevention, and expense reimbursement, among others. For these reasons, some organizations deploy telematics and related technologies to better manage their fleets. One tool in this process is the vehicle camera, such as a dashcam, capable of monitoring (and recording) video and/or audio of the driver, passengers, and in some cases persons outside the vehicle. These devices also can track location and how a vehicle is being driven – hard acceleration, sharp turns, lane changes, etc. But it is the use of AI and machine learning technologies that is raising questions about whether biometric identifiers and/or information are being collected.

According to at least one of these recently filed complaints, the vehicle camera does not just take a traditional video recording of the driver. It uses AI and machine learning technologies to detect driver behavior. More specifically, product descriptions claim the intelligent cameras can identify if drivers are inattentive, distracted, or tired through facial mapping technology which scans the geometry of the face and analyzes the resulting data.

Under BIPA, a “biometric identifier” generally means “a retina or iris scan, fingerprint, voiceprint, or scan of hand or face geometry,” and “biometric information” means “any information, regardless of how it is captured, converted, stored, or shared, based on an individual’s biometric identifier used to identify an individual.”

It is unclear at this point whether these complaints have any merit. However, organizations that are using AI-powered vehicle cameras should be reviewing that technology carefully with their vendors to understand the nature and extent of the data being collected. For assistance with understanding the legal framework concerning biometric information, please see our Biometric Law Map.

The CCPA has reached the two-year mark. This is a good time for businesses to review the success of their compliance programs, recalibrate for the CCPA’s third year, and gear up for the CPRA’s January 1, 2023 effective date.

Here are a few suggestions:

  1. Privacy Policies. The CCPA requires a business to update the information in its privacy policy or any California-specific description of consumers’ privacy rights at least once every 12 months. If your business has not already done so, now is a good time to review both online and offline data collection practices to ensure privacy policies accurately disclose, at a minimum, the categories of personal information (“PI”) collected in the preceding 12 months, the categories of PI sold in the preceding 12 months, and the categories of PI disclosed for a business purpose in the preceding 12 months.

Given the challenges of the last few months, your business may be collecting PI beyond what it currently discloses in its privacy policies. For example, the business may need to update its privacy policies to disclose the collection and use of COVID-19 related screening information, biometric information, or PI collected as a result of remote work situations.

If your business needs to update its privacy policy to reflect additional data collection activities, it will likely need to update its “notice at collection”, including employee and job applicant privacy notices.

  2. Employee training. The CCPA requires that a business ensure all employees handling inquiries about consumer rights, the business’s privacy practices, or its compliance with the CCPA are informed of applicable CCPA requirements. Businesses will want to
  • review training programs to ensure they include appropriate CCPA related content;
  • determine whether employee handbooks and manuals have been updated accordingly; and
  • document that relevant employees have received training.
  3. Reasonable Safeguards. The CCPA does not currently create an affirmative obligation to implement reasonable safeguards for protecting consumer PI; however, it provides a private right of action to consumers whose PI has been involved in a data breach resulting from the business’s failure to implement reasonable security safeguards. With this in mind, your business will want to review whether it has
  • performed an annual risk assessment to identify new or enhanced risks, threats, or vulnerabilities to its systems or the PI it collects or maintains;
  • reviewed and updated its written information security program and data retention schedule;
  • practiced its incident response plan; and
  • updated its vendor management program to address cyber-based risk.

CCPA compliance is an ongoing activity, and these action items are worthy of review at this milestone. However, further year-end review might also include

  • an assessment of the business’s website’s accessibility;
  • confirmation that service provider agreements have been amended to satisfy the CCPA; and
  • incorporation of relevant CCPA provisions in new service provider contracts.

Although the CCPA does not mandate implementing reasonable safeguards, this will change effective January 1, 2023. The CPRA, which amends the CCPA, creates an affirmative duty to do so. Businesses should use the next year to identify what constitutes reasonable safeguards for their data and systems, begin implementing those safeguards, update internal policies and procedures as necessary, and train staff.

The CPRA also amends the CCPA disclosure requirements to include information relating to the collection and use of “sensitive personal information”. In addition, California consumers will have the right to limit the business’s use of this information in certain circumstances, similar to the right to opt out of the sale of personal information. In order to comply, businesses may need to revisit and expand their data mapping to capture sensitive personal information.

These are just two examples that necessitate reviewing your business’s data protection program and setting in motion processes to prepare for the CPRA. We will continue to post on steps your business can take in anticipation of January 1, 2023.

The leaders of our Wage & Hour Practice, Justin Barnes, Jeffrey Brecher, and Eric Magnus, collaborated with us on this article.

According to reports, Kronos, the cloud-based HR management service provider, suffered a data incident involving ransomware affecting its information systems. Kronos communicated that it discovered the incident late on Saturday, December 11, 2021, when it “became aware of unusual activity impacting UKG solutions using Kronos Private Cloud.” Shortly after, Kronos issued a helpful Q&A for customers impacted by the incident. The company confirmed:

[T]his is a ransomware incident affecting the Kronos Private Cloud—the portion of our business where UKG Workforce Central, UKG TeleStaff, Healthcare Extensions, and Banking Scheduling Solutions are deployed. At this time, we are not aware of an impact to UKG Pro, UKG Ready, UKG Dimensions, or any other UKG products or solutions, which are housed in separate environments and not in the Kronos Private Cloud.

This incident has already impacted time management, payroll processing, and other HR-related activities of organizations using the affected services. Ransomware and similar attacks also could compromise confidential and personal information maintained on affected systems, although there is no indication of that at this point. Clearly, organizations that use these services can be affected in several ways. The FAQs below provide information on some of the key issues these organizations should be thinking about.

Isn’t this really Kronos’ problem?

This certainly is a significant issue for Kronos and, based on communications from Kronos, the company is in the process of remediating the incident and alerting its impacted customers. However, because of the nature and extent of the services Kronos provides to its customers (i.e., employers), there are several things that HR, IT, and other groups inside organizations that are customers of the affected services need to be doing. We address some of those items below.

From a communications perspective, this incident likely will receive significant news coverage, prompting questions from employees about the impact of the incident on their personal information, their schedules, their pay, etc. Employers will need to think carefully about how to respond to these inquiries, especially when there is little known at this point about the incident.

From a compliance perspective, employers should be reviewing and implementing their contingency plans depending on the scope of services received from Kronos. For example, clients using Kronos time management systems should be evaluating what measures they should be implementing to ensure their employees’ time is properly captured and paid. A company has a legal obligation to accurately track hours worked, regardless of whether the third-party vendor responsible for that task (like Kronos) is able to do so. Clients might want to institute, in the short term, paper timekeeping and tracking systems to ensure that employees are taking appropriate breaks and being paid for all time worked. It would be especially helpful in this situation to have employees sign off that the amount of time they report and the breaks they took are accurate.

From a cybersecurity standpoint, the answer to the question of whether this is only Kronos’ problem likely is no. All 50 states, as well as certain cities and other jurisdictions, have breach notification laws. If there is a breach of security under those laws, there may be a notification obligation. The notification obligation to affected individuals largely rests with the owner of that information, which likely would be employers. We anticipate that if notification is required, Kronos may take the lead on that, although employers will want some assurances that notification will be provided in a time and manner consistent with applicable law.

What should we be doing?

There are several steps employers likely will need to take in response to this incident, not all of which are clear at this point because of what little is currently known. Still, there are some action items affected employers should be considering:

  • Stay informed. Closely follow the developments reported by Kronos, including coordinating with your HR and IT teams.
  • Consult with counsel. Experienced cybersecurity and employment counsel can help employers properly identify their obligations and coordinate with Kronos, as needed.
  • Communicate with employees. Maintaining accurate and consistent communications with employees is critical, especially considering a significant part of the discussions around this incident could be taking place on social media. Your employees and their representatives, where applicable, may already be aware of this incident. To be prepared to address and respond to employee concerns, organizations should consider providing an initial short summary of the incident to potentially impacted individuals as soon as possible. That communication could be expanded over time with more information as it becomes available, perhaps in the form of FAQs like these. Less is more in the initial communication, again, given what little is known. However, it is important to let employees know the organization is aware of the incident and actively taking steps to mitigate its effects on employees.
  • Review Your Kronos Services and Service Agreement. Begin evaluating the services that the organization receives from Kronos. This will help to implement contingency plans, but also to assess the nature and extent of the information that Kronos maintains on the organization’s behalf. The organization might be able to conclude early on that, while there may be impacted systems and operations, Kronos was not in possession of the kind of personal information pertaining to employees of the organization that could lead to a breach notification obligation. This information could be reassuring for employees. Also, review the services agreement between the organization and Kronos as it may include provisions that have particular relevance here. For example, the agreement may outline a process agreed to between the parties for handling data incidents like this.
  • Review your cyber insurance policy. It might be premature to make a claim against the organization’s cyber policy, assuming the organization has a cyber policy – an important consideration nowadays. But, key stakeholders should review the situation and discuss potential coverage options with the organization’s insurance broker and/or legal counsel. Becoming more familiar with existing cyber insurance policies and coverage is prudent as it might cover some of the costs an organization incurs in connection with incidents like this.
  • Evaluate vendors. Some are asking whether the “Log4j” vulnerability may have led to the Kronos incident; however, that has not been confirmed at this time. Log4j is a Java library for logging error messages in applications. Because other vendors also may have Log4j exposure, organizations may want to use this incident as a reason to examine more closely the data privacy and security practices of other third-party vendors, regardless of whether the Log4j vulnerability was exploited here. This is particularly the case for those vendors that handle the personal information of employees and customers.
  • Revisit your own data security compliance measures. Organizations also should check their own systems for Log4j and other vulnerabilities and fix them as quickly as possible.
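As a first pass at that self-check, IT teams can inventory systems for vulnerable Log4j components. The sketch below flags “log4j-core” JAR files older than a fixed version by filename alone; the version threshold reflects Apache’s guidance as of December 2021 and a filename heuristic misses shaded or renamed JARs, so this is a starting point, not a substitute for a real vulnerability scanner:

```python
# Hypothetical first-pass check for vulnerable Log4j components: walk a
# directory tree and flag "log4j-core" JARs older than a fixed version,
# judging by filename alone.
import os
import re

# Threshold reflects Apache's advisories as of December 2021; confirm against
# the current Log4j security guidance before relying on it.
FIXED = (2, 17, 0)
PATTERN = re.compile(r"log4j-core-(\d+)\.(\d+)\.(\d+)\.jar$")

def find_vulnerable_log4j(root: str) -> list:
    """Return paths of log4j-core JARs under `root` older than FIXED."""
    hits = []
    for dirpath, _dirnames, filenames in os.walk(root):
        for name in filenames:
            m = PATTERN.search(name)
            if m and tuple(map(int, m.groups())) < FIXED:
                hits.append(os.path.join(dirpath, name))
    return sorted(hits)
```

Any hits should be patched or removed promptly and, where relevant, reported up the incident response chain.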

Will the state breach notification laws apply?

We do not know if there has been a “breach” at this point. This will require investigation and analysis of the incident, which we understand is underway at Kronos at this time. However, if the incident affects certain unencrypted personal information of individuals, such as names coupled with Social Security numbers, driver’s license numbers, financial account numbers, medical information, biometric information, or certain other data elements, state breach notification laws may apply. Organizations that utilize Kronos’ services globally must consider a broader definition of personal data, such as under the General Data Protection Regulation (GDPR).
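The “name coupled with a sensitive element” structure common to many state statutes can be illustrated schematically. The field names and simplified trigger rule below are hypothetical and do not track any one state’s definitions; real statutes differ on covered elements, encryption safe harbors, and risk-of-harm analyses:

```python
# Hypothetical illustration of the "name + unencrypted sensitive element"
# structure found in many state breach notification laws. Simplified for
# illustration only; not legal advice.
SENSITIVE_ELEMENTS = {
    "ssn", "drivers_license", "financial_account",
    "medical_info", "biometric_info",
}

def may_trigger_notification(record: dict) -> bool:
    """True if a record pairs a name with an unencrypted sensitive element."""
    if record.get("encrypted"):
        return False  # many statutes exempt properly encrypted data
    has_name = bool(record.get("name"))
    has_element = any(record.get(e) for e in SENSITIVE_ELEMENTS)
    return has_name and has_element
```

A screen like this can help triage which affected files warrant closer legal analysis, but the actual notification determination always turns on the specific statutes in play.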

Thousands of organizations have suffered similar attacks, all of which illustrate the importance of planning for a response, not only trying to prevent one. Third party service providers play important roles for most organizations, particularly with regard to their HR systems and corresponding operations. It will take some time to work through this incident, but it should be a reminder for all affected organizations to continue to develop, refine, and practice their contingency plans.

On September 17, 2021, a three-judge panel of the Illinois Appellate Court for the First Judicial District issued a long-awaited decision regarding the statute of limitations for claims under the state’s Biometric Information Privacy Act (“BIPA”) in Tims v. Black Horse Carriers, Inc. The Tims decision marks the first appellate guidance regarding this issue. Although the BIPA is silent as to the applicable statute of limitations, the panel concluded that claims brought under sections 15(a), (b), and (e) of the statute, which require companies to have a publicly available policy, obtain informed consent, and reasonably safeguard biometric data, are subject to a five-year limitations period. BIPA claims brought under sections 15(c) and (d) of the statute, which prohibit profiting from the use of biometric data or disclosing biometric data, are subject to a one-year statute of limitations.

In reaching its split decision regarding the applicable statute of limitations, the panel noted that each duty under the BIPA is “separate and distinct,” and that a private entity “could violate one of the duties while adhering to others.” The panel further opined that “a plaintiff who alleges and eventually proves violation[s] of multiple duties could collect multiple recoveries of liquidated damages.” In reaching its conclusion, the panel looked to the text of the BIPA, without considering the statute’s legislative history, and to precedent, including the Illinois Supreme Court’s decision in Rosenbach v. Six Flags Entertainment Corp.

Section 13-201 of the Illinois Code of Civil Procedure provides a one-year statute of limitations for “actions for slander, libel or for publication of matter violating the right of privacy,” while section 13-205 provides a five-year “catchall” statute of limitations for “all civil actions not otherwise provided for.” The panel concluded that section 13-201 does not apply to all privacy actions, but rather only privacy actions “where publication is an element or inherent part of the action.” On these grounds, the panel determined that section 13-201’s one-year statute of limitations only applies to BIPA claims under sections 15(c) and (d) of the statute, which prohibit entities from “sell[ing], leas[ing], trad[ing], or otherwise profit[ing] from” or disclosing biometric data. With respect to those claims, the panel held that “publication or disclosure of biometric data is clearly an element of an action.”

Conversely, the panel concluded that claims under sections 15(a), (b), and (e) “have absolutely no element of publication or dissemination,” and thus, the five-year “catchall” statute of limitations applies.

In Tims, the First District was not asked, nor did it decide, the issue of when a claim under the BIPA accrues.  However, the accrual issue is currently the subject of an appeal before the federal Seventh Circuit Court of Appeals in Cothron v. White Castle.  The Seventh Circuit heard oral argument in Cothron on September 14, 2021, and has been asked by the plaintiff-appellant to certify the accrual issue to the Illinois Supreme Court for consideration.  In Marion v. Ring Container, the Illinois Appellate Court for the Third Judicial District is set to decide whether a one-year, two-year, or five-year statute of limitations applies to claims under the BIPA.  The Marion appeal is currently stayed pending a decision in McDonald v. Symphony Bronzeville, in which the Illinois Supreme Court will decide with finality whether BIPA claims arising in the employment context are preempted by the Illinois Workers’ Compensation Act.

There has been an influx of biometric privacy litigation in recent years. Private entities that collect, use, and store biometric data increasingly face compliance obligations as the law attempts to keep pace with ever-evolving technology. Creating a robust privacy and data protection program or regularly reviewing an existing one can mitigate risk and ensure legal compliance.


Yesterday, Baltimore’s local ordinance prohibiting persons from “obtaining, retaining, accessing, or using certain face surveillance technology or any information obtained from certain face surveillance technology” became effective. The new ordinance prohibits the use of facial recognition technology by city residents, businesses, and most of the city government (excluding the city police department) until December 2022. Baltimore joins a growing list of localities regulating private use of facial recognition technology, including Portland, Oregon, and New York City.

Specifically, the Baltimore ordinance prohibits an individual or entity from obtaining, retaining, or using a facial surveillance system, or any information obtained from a facial surveillance system, within the boundaries of Baltimore City. “Facial surveillance system” is defined as any computer software or application that performs face surveillance. Notably, the ordinance explicitly excludes from the definition of “facial surveillance system” a biometric security system designed specifically to protect against unauthorized access to a particular location or an electronic device, meaning organizations using a biometric security system for employee/visitor access to their facilities would appear to still be permitted under the bill. The ordinance also excludes from its definition of “facial surveillance system” the Maryland Image Repository System (MIRS) used by the Baltimore City Police in criminal investigations.

Significantly, a person in violation of the law is subject to a fine of not more than $1,000, imprisonment of not more than 12 months, or both. Each day that a violation continues is considered a separate offense. This criminalization of the use of facial recognition is the first of its kind in the United States.

Businesses in the City of Baltimore should be evaluating whether they are using facial recognition technologies, whether they fall into one of the exceptions in the ordinance, and, if not, what alternatives they have for verification, security, and the other purposes for which the technology was implemented. An earlier post providing details and analysis of the Baltimore prohibition on face surveillance technology is available here.


Facial recognition technology has become increasingly popular in recent years in the employment and consumer space (e.g., employee access, passport check-in systems, payments on smartphones), and in particular during the COVID-19 pandemic. As the need arose to screen persons entering a facility for symptoms of the virus, including temperature, thermal cameras, kiosks, and other devices embedded with facial recognition capabilities were put into use. However, many have objected to the use of this technology in its current form, citing problems with its accuracy, and now, more alarmingly, there is growing concern that “Faces are the Next Target for Fraudsters,” as summarized in a recent article in the Wall Street Journal (“WSJ”).

In the last year, there has been an uptick in hackers trying to “trick” facial recognition technology in a myriad of settings, such as fraudulently claiming unemployment benefits from state workforce agencies. The majority of states now use facial recognition technology to verify the identity of eligible citizens, ironically enough, in order to prevent other types of fraud. As discussed in the WSJ article, ID.me, Inc., which provides facial recognition software for 26 states to help verify individuals eligible for unemployment benefits, saw more than 80,000 attempts to fool government identification facial recognition systems between June 2020 and January 2021. Hackers of facial recognition systems use a myriad of techniques, including deepfakes (AI-generated images), special masks, or even holding up images or videos of the individual the hacker is looking to impersonate.

Fraud is not the only concern with facial recognition technology. Despite its appeal for employers and organizations, there are concerns regarding the accuracy of the technology, as well as significant legal implications to consider. First, there are growing concerns regarding the accuracy and biases of the technology. A recent report by the National Institute of Standards and Technology studied 189 facial recognition algorithms, considered a “majority of the industry.” The report found that most of the algorithms exhibit bias, falsely identifying Asian and Black faces 10 to more than 100 times more often than White faces. Moreover, false positives are significantly more common in women than men, and more elevated among the elderly and children than among middle-aged adults.

In addition, several U.S. localities have already banned the use of facial recognition for law enforcement, other government agencies, and/or private and commercial use. The City of Baltimore, for example, recently banned the use of facial recognition technologies by city residents, businesses, and most of the city government (excluding the city police department) until December 2022. Council Bill 21-0001 prohibits persons from “obtaining, retaining, accessing, or using certain face surveillance technology or any information obtained from certain face surveillance technology.” Likewise, in September 2020, the City of Portland, Oregon, became the first city in the United States to ban the use of facial recognition technologies in the private sector, citing, among other things, a lack of standards for the technology and wide ranges in accuracy and error rates that differ by race and gender. Failure to comply can be painful: the ordinance provides persons injured by a material violation a cause of action for damages or $1,000 per day for each day of violation, whichever is greater.

And finally, companies looking to implement facial recognition technologies must consider their obligations under laws such as the Illinois Biometric Information Privacy Act (BIPA) and the California Consumer Privacy Act (CCPA). The BIPA addresses a business’s collection of biometric data from both customers and employees, including, for example, facial recognition scans, fingerprints, and voiceprints. The BIPA requires informed consent prior to collection of biometric data, mandates protection obligations and retention guidelines, and creates a private right of action for individuals aggrieved by BIPA violations, which has resulted in a flood of BIPA class action litigation in recent years. Texas, Washington, and California have similar requirements, New York is considering a BIPA-like privacy bill, and New York City recently created BIPA-like requirements for retail and hospitality businesses concerning biometric collection from customers. Additionally, states are increasingly amending their breach notification laws to add biometric information to the categories of personal information that require notification, including 2020 amendments in California, D.C., and Vermont. Moreover, there are a myriad of data destruction, reasonable safeguards, and vendor requirements to consider, depending on the state, when collecting biometric data.

Takeaway

Facial recognition and other biometric technologies are booming and continue to reach into facets of daily life that would have been hard to contemplate only a few years ago. The technology brings innumerable potential benefits as well as significant data privacy and cybersecurity risks. Organizations that collect, use, and store biometric data face increasing compliance obligations as the law attempts to keep pace with technology, cybercrime, and public awareness of data privacy and security. Creating a robust privacy and data protection program, or regularly reviewing an existing one, is a critical risk management and legal compliance step.

Colorado is officially the third U.S. state to enact comprehensive privacy legislation, following California and Virginia. The Colorado General Assembly passed the Colorado Privacy Act (CPA), Senate Bill 21-190, on June 8, 2021, and Governor Jared Polis signed it into law on July 7, 2021.

The Colorado Privacy Act takes effect July 1, 2023, six months after the Virginia Consumer Data Protection Act (VCDPA) and California Privacy Rights Act (CPRA).

Applicability

The CPA imposes new obligations on Controllers—that is, any entity that (i) determines the purposes and means of processing personal data, (ii) conducts business in Colorado or produces or delivers commercial products or services intentionally targeted to residents of the state, and (iii) either: (a) controls or processes the personal data of more than 100,000 Colorado residents per year or (b) derives revenue from the sale of personal data and controls or processes the personal data of at least 25,000 Colorado residents.

It also provides new rights to Consumers—that is, any individual who is a Colorado resident acting in an individual or household context.

The CPA does not apply to data that is subject to other federal privacy laws such as the Health Insurance Portability and Accountability Act (HIPAA), the Children’s Online Privacy Protection Act (COPPA), the Gramm-Leach-Bliley Act (GLBA), the Family Educational Rights and Privacy Act (FERPA), and the Securities Exchange Act of 1934. The CPA also exempts employment data, higher education institutions, nonprofits, state and local governments, and public utility customer records (so long as they are not sold).

Consumer Rights under the Colorado Privacy Act

The rights the CPA affords to Consumers are similar to those in the VCDPA and CCPA/CPRA.

In broad strokes, the CPA regulates the use of and disclosures surrounding “personal data,” which includes information that is linked, or reasonably linkable, to an identifiable person, and “sensitive data,” which includes data revealing racial or ethnic origin, religious beliefs, a mental or physical health condition, sexual orientation, citizenship, genetic or biometric data, or personal data from a known child.

The CPA empowers Consumers with new controls over their data, including the right to:

  1. opt out of the processing of certain personal data;
  2. access their personal data (up to twice per calendar year);
  3. correct inaccurate data;
  4. delete personal data; and
  5. obtain their personal data in a portable format.

Controller Duties under the Colorado Privacy Act

Similarly, the CPA creates duties for Controllers, including the:

  • Duty of transparency;
  • Duty of purpose specification;
  • Duty of data minimization;
  • Duty to avoid secondary use;
  • Duty to avoid unlawful discrimination; and
  • Duty regarding sensitive data.

In addition, while Consumers may request access to their personal data, Controllers may not require that a Consumer create a new account in order to exercise this right (or retaliate with increased cost or decreased availability of a product or service). When responding to Consumer data requests, Controllers must:

  • Take action on the Consumer’s request without undue delay and within 45 days of receiving the request—with few exceptions.
  • Develop an internal process for Consumers to appeal refusals of data requests.
  • Notify the Consumer that it may contact the Colorado Attorney General if the Consumer has concerns about the result of the response and outcome of appeal.

Controllers must also conduct data protection assessments for each processing activity involving a heightened risk of harm to Consumers, including:

  • The sale of personal data;
  • Processing of sensitive data; or
  • Processing personal data for targeted advertising if it could lead to unfair or deceptive treatment of, or a disparate impact on, Consumers; financial or physical injury; physical or other intrusion upon seclusion; or other substantial injury.

Controllers must make these data protection assessments available to the Colorado Attorney General upon request.

Enforcement

One key difference between the CPA and the California and Virginia privacy laws is that the CPA is enforceable by both district attorneys and the office of the attorney general. This broadened enforcement mechanism could lead to greater scrutiny of affected businesses.

Unlike the CCPA, the CPA does not include a private right of action. The attorney general or district attorney may, however, institute a civil action or pursue injunctive relief. Failure to comply with the CPA may be considered a deceptive trade practice. Financial penalties are left to the discretion of the courts.

Key Takeaways

Colorado may be only the third state to enact comprehensive privacy legislation, but other states will likely soon follow. The differences between the CPA, VCDPA, and CPRA are subtle, and there are plenty of technical details to sift through. While the overlap among the laws may ease the burden of compliance, companies still need to ensure their data collection activities fully comply with the provisions of each privacy act.

And with more states likely to follow suit, data privacy compliance will only get more complicated.

Please contact a Jackson Lewis attorney with any questions.

* Jackson Biesecker is a law clerk in our Privacy, Data & Cybersecurity Practice Group who contributed substantially to this article.

The Baltimore City Council recently passed an ordinance, by a vote of 13-2, barring the use of facial recognition technology by city residents, businesses, and most of the city government (excluding the city police department) until December 2022. Council Bill 21-0001 prohibits persons from “obtaining, retaining, accessing, or using certain face surveillance technology or any information obtained from certain face surveillance technology.”

Facial recognition technology has become more popular in recent years, including during the COVID-19 pandemic. As the need arose to screen persons entering a facility for symptoms of the virus, including elevated temperature, thermal cameras, kiosks, and other devices embedded with facial recognition capabilities were put into use, often inadvertently. However, many have objected to the use of this technology in its current form, citing problems with its accuracy, as summarized in a June 9, 2020 New York Times article, “A Case for Banning Facial Recognition.”

While many localities across the nation, such as San Francisco and Oakland, have barred the use of facial recognition systems by city police and other government agencies, Baltimore is only the second city (following Portland, Oregon) to ban biometric technology use by private residents and businesses. Effective January 1, 2021, the City of Portland banned the use of facial recognition by private entities in any “places of public accommodation” within the boundaries of the city. “Places of public accommodation” was broadly defined to include any “place or service offering to the public accommodations, advantages, facilities, or privileges whether in the nature of goods, services, lodgings, amusements, transportation or otherwise.”

Specifically, the Baltimore ordinance prohibits an individual or entity from obtaining, retaining, or using a facial surveillance system, or any information obtained from a facial surveillance system, within the boundaries of Baltimore City. “Facial surveillance system” is defined as any computer software or application that performs face surveillance. Notably, the ordinance explicitly excludes from the definition of “facial surveillance system” a biometric security system designed specifically to protect against unauthorized access to a particular location or an electronic device, meaning employers using a biometric security system for employee/visitor access to their facilities would still appear to be permitted under the bill. The ordinance also excludes from its definition of “facial surveillance system” the Maryland Image Repository System (MIRS) used by the Baltimore City Police in criminal investigations.

A person in violation of the law is subject to a fine of not more than $1,000, imprisonment of not more than 12 months, or both. Each day that a violation continues is considered a separate offense. The criminalization of facial recognition use is the first of its kind in the United States.

The Baltimore bill also includes a separate section applicable only to the Mayor and City Council of Baltimore City, requiring an annual surveillance report by the Director of Baltimore City Information and Technology (or any successor entity), in consultation with the Department of Finance, to be submitted to the Mayor of Baltimore detailing: (1) each purchase of surveillance technology during the prior fiscal year, disaggregated by the purchasing agency, and (2) an explanation of the use of the surveillance technology. In addition, the report must be posted to the Baltimore City Information and Technology website. Examples of surveillance technology that must be included in the report include automatic license plate readers, x-ray vans, mobile DNA capture technology, and software designed to forecast criminal activity or criminality.

It is important to note that the bill’s provisions are set to automatically expire December 31, 2022, unless the City Council, after appropriate study, including public hearings and testimonial evidence, concludes that the prohibitions and requirements are in the public interest, in which case the law will be extended for an additional five years.

The Baltimore ordinance has been met with significant opposition from industry groups, particularly as it would be the first in the U.S. to criminalize private use of biometric technologies. In a joint letter, the Security Industry Association (SIA), the Consumer Technology Association (CTA), the Information Technology and Innovation Foundation (ITIF), and the XR Association urged the City Council to reject the ordinance on the grounds that it is overly broad and prohibits commercial applications of facial recognition technology that already have widespread public acceptance and provide “beneficial and noncontroversial” services, including, for example: increased and customized accessibility for disabled persons, verification of patient identities at healthcare facilities while reducing the need for close-proximity interpersonal interactions, enhanced consumer security at banks to verify purchases and ATM access, and many more. A similar concern was voiced by Councilmember Isaac Schleifer, who cast one of the two votes opposing the ordinance.

The ordinance now awaits the signature of Baltimore Mayor Brandon Scott and, if signed, will become effective 30 days after enactment. In anticipation of the ordinance’s potential enactment, businesses in the City of Baltimore should begin evaluating whether they are using facial recognition technologies, whether they fall into one of the exceptions in the ordinance, and, if not, what alternatives they have for verification, security, and other purposes for which the technology was implemented.