Trump Administration To Test Biometric Program To Scan Faces Of Drivers |  Zero Hedge

Earlier this month, our Immigration Group colleagues reported that the Department of Homeland Security (DHS) would release a new regulation to expand the collection of biometric data in the enforcement and administration of immigration laws. However, as reported by Roll Call, a DHS Inspector General report raised significant concerns about whether the Department is able to adequately protect sensitive biometric information, particularly with regard to its use of subcontractors. The expanded use of biometrics outlined in the Department’s proposed regulation, like the increased use of biometric information such as fingerprints or facial recognition by private organizations, heightens the risk to such data.

The amount of biometric information maintained by DHS is already massive. The DHS Office of Biometric Identity Management maintains the Automated Biometric Identification System, which serves as the biometric data repository for more than 250 million people and can process more than 300,000 biometric transactions per day. U.S. Customs and Border Protection (CBP) is mandated to deploy a biometric entry/exit system to record arrivals and departures to and from the United States, with the long-term goal to biometrically verify the identity of all travelers exiting the United States and ensure that each traveler has physically departed the country at air, land, and sea departure locations.

In 2018, CBP began a pilot effort known as the Vehicle Face System (VFS) in part to test the ability to capture facial images of volunteer passengers as they drove by at speeds under 20 mph and to biometrically match the captured images against a gallery of recent travelers. DHS hired a subcontractor to assist with the development of the technology.

According to the inspector general’s report, DHS has a range of policies and procedures to protect biometric information, which it considers sensitive personally identifiable information (SPII). Among those policies, DHS’ Handbook for Safeguarding Sensitive PII, Privacy Policy Directive 047-01-007, Revision 3, December 2017, requires contractors and consultants to protect SPII to prevent identity theft or other adverse consequences, such as privacy incidents, compromise, or misuse of their data.

Despite these policies, the DHS subcontractor engaged to support the pilot directly violated DHS security and privacy protocols when it downloaded SPII, including traveler images, to an unencrypted device and stored it on its own network. The subcontractor obtained access to this data between August 2018 and January 2019 without CBP’s authorization or knowledge. Later in 2019, the subcontractor’s network was subjected to a malicious ransomware attack, resulting in the compromise of 184,000 facial images of cross-border travelers collected through the pilot program, at least 19 of which were posted on the dark web.

As one of our 10 Steps for Tackling Data Privacy and Security Laws, “Vendors – trust but verify” is critical. For DHS, its failure to do so may damage the public’s trust, resulting in travelers’ reluctance to permit DHS to capture and use their biometrics at U.S. ports of entry. Non-governmental organizations that experience a similar situation with one of their vendors face an analogous loss of trust, as well as adverse business impacts, compliance enforcement, and litigation risks.

Among the recommendations CBP made following the breach was to ensure implementation of USB device restrictions and to apply enhanced encryption methods. CBP also sent a memo requiring all IT contractors to sign statements guaranteeing compliance with contract terms related to IT and data security. Like DHS, many organizations are developing written policies and procedures following risk assessments and other best practices. However, it is not enough to prepare and adopt policies; implementation is key.

A growing body of law in the United States requires not only the safeguarding of personal information, including biometric information, by organizations that own it, but also by the third-party service providers that process it on behalf of the owners. Carefully and consistently managing vendors and their access, use, disclosure, and safeguarding of personal information is a critical part of any written information security program.

Whether it is facial recognition technology being used in connection with COVID-19 screening tools and in law enforcement, continued use of fingerprint-based time management systems, or the use of various biometric identifiers for physical security and access management, applications involving biometric identifiers and information in the public and private sectors continue to grow. Concerns about the privacy and security of that information continue to grow as well. Several states have laws protecting biometric information in one form or another, chief among them Illinois, but the desire for federal legislation remains.

Modeled after Illinois’s Biometric Information Privacy Act (BIPA), the National Biometric Information Privacy Act (the Act), proposed by Sens. Jeff Merkley and Bernie Sanders, contains three key provisions:

  • A requirement to obtain consent from individuals prior to collecting or disclosing their biometric identifiers and information.
  • A private right of action against entities covered by the Act that violate its protections, entitling aggrieved individuals to recover, among other things, the greater of (i) $1,000 in liquidated damages or (ii) actual damages for negligent violations of the protections granted under the law.
  • An obligation to safeguard biometric identifiers and biometric information in a manner similar to how the organization safeguards other confidential and sensitive information, such as Social Security numbers.

The Act would apply to “private entities,” generally including a business of any size in possession of biometric identifiers or biometric information of any individual. Federal, state, and local government agencies and academic institutions are excluded from the Act.

Under the Act, private entities would be required to:

  • Develop and make available to the public a written policy establishing a retention schedule and guidelines for permanently destroying biometric identifiers and biometric information. That schedule may not extend beyond one year after an individual’s last interaction with the entity, though destruction could be required earlier;
  • Collect biometric identifiers or biometric information only when needed to provide a service to the individuals or have another valid business reason;
  • Inform individuals that their biometric identifiers or biometric information is being collected or stored, along with the purpose and length of the collection, storage, or use, and obtain a written release from individuals that may not be combined with other consents, including an employment agreement;
  • Obtain a written release immediately prior to the disclosure of any biometric identifier or biometric information that includes the data to be disclosed, the reason for the disclosure, and the recipients of the data; and
  • Maintain the information using a reasonable standard of care.

Readers familiar with the BIPA in Illinois will recognize these requirements. Readers familiar with the California Consumer Privacy Act (CCPA) will likewise recognize the following “Right to Know.” The Act would grant individuals the right to request certain information about biometric identifiers or biometric information collected by a covered entity within the preceding 12-month period. This information includes “specific pieces of personal information” and “the categories of third parties with whom the business shares the personal information.” The Act uses the term “personal information” but does not define it, leaving it unclear whether it pertains only to biometric identifiers and biometric information.

Most troubling is the private right of action provision referenced above. The Act uses language similar to that in the BIPA, which has led to a flood of class action litigation, including a decision by the Illinois Supreme Court finding plaintiffs need not show actual harm to recover under the law. The legislative process likely will result in some modification to the bill, assuming it survives at all; privacy bills often stall at the federal level. Nonetheless, we will continue to monitor the progress of this and similar laws.

Pending legislation could create new consumer privacy rights in Massachusetts. Earlier this year, Senator Cynthia Creem presented An Act Relative to Consumer Data Privacy in the Massachusetts Senate. This Consumer Privacy Bill, SD.341, combines key aspects of the California Consumer Privacy Act (CCPA) and Illinois’s Biometric Information Privacy Act (BIPA). This bill would allow Massachusetts consumers a private right of action if their personal information or biometric information (referred to separately in the bill) is improperly collected.

The Consumer Privacy Bill defines “biometric information” as an individual’s physiological, biological or behavioral characteristics, including an individual’s DNA, that can be used, singly or in combination with each other or with other identifying data, to establish individual identity. Biometric information includes, but is not limited to, imagery of the iris, retina, fingerprint, face, hand, palm, vein patterns, and voice recordings, from which an identifier template, such as a faceprint, a minutiae template, or a voiceprint, can be extracted, and keystroke patterns or rhythms, gait patterns or rhythms, and sleep, health, or exercise data that contain identifying information.

The bill defines “personal information” as any information relating to an identified or identifiable consumer. “Personal information” means information that identifies, relates to, describes, is capable of being associated with, or could reasonably be linked, directly or indirectly, with a particular consumer or the consumer’s device.

However, this definition does not include publicly available information or consumer information that is deidentified or aggregate consumer information. Moreover, the bill creates an exception for a business collecting or disclosing personal information of the business’s employees so long as the business is collecting or disclosing such information within the scope of its role as an employer. Therefore, unlike California’s CCPA, where the application to employee data remains an open question, the current text of the Massachusetts bill makes reasonably clear that the law would not apply to employee data as defined above. That said, it is still early in the legislative process and the bill could be revised to include employee data.

The pending legislation would require businesses collecting a Massachusetts consumer’s personal information to notify the consumer of the following rights before the point of collection:

(1) The categories of personal information it will collect about that consumer;

(2) The business purposes for which the categories of personal information shall be used;

(3) The categories of third parties with whom the business discloses personal information;

(4) The business purpose for third party disclosure; and

(5) The consumer’s rights to request:

                  (A) A copy of the consumer’s personal information;

                  (B) The deletion of the consumer’s personal information; and

                  (C) An opt-out of third party disclosure.

In addition to this notice requirement, the bill would give consumers a statutory right to request that businesses collecting their personal information disclose to the consumer:

(1) The specific pieces of personal information the business has collected about that consumer;

(2) The sources from which the consumer’s personal information was collected;

(3) The names of third parties to whom the business disclosed the consumer’s personal information; and

(4) The business purpose for third party disclosure.

Businesses would have to make available to consumers two or more designated methods for submitting verified consumer requests for personal information, including, if the business maintains a web site, a link on the home page of the web site. A business receiving a verifiable consumer request generally must provide the requested information within 45 days of receiving the request, but may extend that period once by an additional 45 days, so long as the request for the extension is provided within the first 45-day period. The proposed legislation also creates a consumer right to request that a business delete any personal information collected from the consumer, and the right to opt out of third party disclosure at any time.

The legislation would be enforceable both through a private right of action and by the Massachusetts Attorney General. A consumer could recover (1) damages in an amount not greater than $750 per consumer per incident or actual damages, whichever is greater, for any violation of the act; (2) injunctive or declaratory relief; and (3) reasonable attorney fees and costs. The Attorney General would be authorized to obtain a temporary restraining order or preliminary or permanent injunction against a violation of the Act. In addition, the Attorney General may seek a civil penalty of not more than $2,500 for each violation or $7,500 for each intentional violation.

This Consumer Privacy Bill would impose administrative burdens on businesses, including an obligation to train employees, and would create new exposure to damages and penalties. Given the litigation we are seeing under BIPA, businesses collecting Massachusetts consumers’ personal information should monitor the progress of this legislation to determine whether they should begin preparations for complying with yet another consumer privacy provision.

 

In 2018, Delta paved the way in airport terminal development by introducing the first biometric terminal at Hartsfield-Jackson Atlanta International Airport, where passengers can use facial recognition technology from curb to gate. Delta now allows members of its Sky Club airport lounges to enter using fingerprints rather than a membership card or boarding pass. Other airlines use biometric data to verify travelers during the boarding process with a photo capture. The photograph is then matched, through biometric facial recognition technology, to photos that were previously taken of the passengers for their passports, visas, or other government documentation.

Though the use of a fingerprint or facial scan aims to streamline and expedite the travel process and strengthen the security of air travel, it also presents heightened security risks for biometric data on a larger scale. As the use of biometric data increases, the effects of a data breach become more expansive. While it is possible to change a financial account number, a driver’s license number, or even a Social Security number, you can’t change your fingerprint or your face, at least not easily. Furthermore, in the past, facial recognition software has not always been able to accurately identify people of color, raising concerns that individuals may be racially profiled.

Yet, many argue that biometric-based technologies can be used to help solve vexing security and logistics challenges concerning travel. For example, in 2016, Congress authorized up to $1 billion collected from certain visa fees to fund implementation of biometric-based exit technology. That was followed by President Trump’s executive order signed in March 2017 directing the Department of Homeland Security to expedite implementation of a biometric entry-exit tracking system for all travelers to the United States. As it stands, we are likely to see a rapid expansion of biometric technology used by airlines and other businesses in the travel industry, so prepare your picture-perfect travel face!

Notably, the use of biometric data is growing across all industries and in a variety of different applications – e.g., premises security, time management, systems access management. But so is the number of state laws intended to protect that data. States such as Illinois, Texas, and Washington are leading the way, with others sure to follow. Regulations include notice and consent requirements, mandates to safeguard biometric information, and obligations to notify individuals in the event biometric information is breached. And litigation is increasing. The Illinois Supreme Court recently handed down a significant decision, for example, concerning the ability of individuals to bring suit under the Illinois Biometric Information Privacy Act (BIPA). In short, individuals need not allege actual injury or adverse effect beyond a violation of their rights under BIPA. The decision is likely to increase the already significant number of suits, including putative class actions, filed under the BIPA.

Companies, regardless of industry, should be reevaluating their biometric use practices, and taking steps to comply with a growing body of law surrounding this sensitive information.

Earlier today, the Illinois Supreme Court handed down a significant decision concerning the ability of individuals to bring suit under the Illinois Biometric Information Privacy Act (BIPA). In short, individuals need not allege actual injury or adverse effect, beyond a violation of his/her rights under BIPA, in order to qualify as an “aggrieved” person and be entitled to seek liquidated damages, attorneys’ fees and costs, and injunctive relief under the Act.  Potential damages are substantial as the BIPA provides for statutory damages of $1,000 per negligent violation or $5,000 per intentional or reckless violation of the Act.  To date, no Illinois court has interpreted the meaning of “per violation,” but the majority of BIPA suits have been brought as class actions seeking statutory damages on behalf of each individual affected.

If they have not already done so, companies should immediately take steps to comply with the statute. That is, they should review their time management, point of purchase, physical security, or other systems that obtain, use, or disclose biometric information (any information, regardless of how it is captured, converted, stored, or shared, based on an individual’s retina or iris scan, fingerprint, voiceprint, or scan of hand or face geometry used to identify an individual) against the requirements under the BIPA. In the event they find technical or procedural gaps in compliance – such as not providing written notice, obtaining a release from the subject of the biometric information, obtaining consent to provide biometric information to a third party, or maintaining a policy and guidelines for the retention and destruction of biometric information – they need to quickly remedy those gaps.  For additional information on complying with the BIPA, please see our BIPA FAQs.

Companies were hoping that the Illinois Supreme Court would ultimately conclude, consistent with the underlying appellate decision, that in order for a plaintiff to bring a claim under the BIPA (i.e. in order for the plaintiff to be considered “aggrieved”) the plaintiff would have to allege actual harm or injury, and not just a procedural or technical violation of the statute.  In reversing and remanding the case, the Illinois Supreme Court held:

The duties imposed on private entities by section 15 of the Act (740 ILCS 14/15) regarding the collection, retention, disclosure, and destruction of a person’s or customer’s biometric identifiers or biometric information define the contours of that statutory right. Accordingly, when a private entity fails to comply with one of section 15’s requirements, that violation constitutes an invasion, impairment, or denial of the statutory rights of any person or customer whose biometric identifier or biometric information is subject to the breach. Consistent with the authority cited above, such a person or customer would clearly be “aggrieved” within the meaning of section 20 of the Act (740 ILCS 14/20) and entitled to seek recovery under that provision. No additional consequences need be pleaded or proved. The violation, in itself, is sufficient to support the individual’s or customer’s statutory cause of action.

The decision is likely to increase the already significant number of suits, including putative class actions, filed under the BIPA.  In the words of the Illinois Supreme Court, “[c]ompliance should not be difficult; whatever expenses a business might incur to meet the law’s requirements are likely to be insignificant compared to the substantial and irreversible harm that could result if biometric identifiers and information are not properly safeguarded; and the public welfare, security, and safety will be advanced.”

An Illinois nursing home is facing a putative class action lawsuit filed by a worker who argues that the facility’s required fingerprint scan for timekeeping poses a threat to their privacy and violates Illinois’s Biometric Information Privacy Act (“BIPA”). From July 2017 to October 2017, at least 26 employment class actions based on the BIPA were filed in Illinois state court, and the filings show no sign of slowing.

Although some consider Illinois the leader in biometric data protection, other states have enacted laws similar to the BIPA, and still others are considering such legislation. Companies that want to implement technology that uses employee or customer biometric information (for timekeeping, physical security, validating transactions, or other purposes) need to be prepared. For more information on the nursing home case and advice on how to prepare when collecting biometric information, our comprehensive article is available here.

Below are additional resources to help navigate biometric information protection laws:

Not to be outdone by the recent attention to biometric information in Illinois, and the Prairie State’s Biometric Information Privacy Act (BIPA), Washington enacted a biometric data protection statute of its own, HB 1493, which became effective July 23, 2017.

What is notable about Washington’s new biometric information law?

  • It prohibits “persons” from “enrolling” “biometric identifiers” in a database for a “commercial purpose” without first providing notice, obtaining consent, or providing a mechanism to prevent the subsequent use of the biometric identifiers for a commercial purpose. Lots of definitions, more on that below.
  • The exact type of notice and consent should depend on the context, and notice must be given through a procedure reasonably designed to be readily available to affected individuals. Note that the law does not require notice and consent if the person collects, captures, or enrolls a biometric identifier and stores it in a biometric system, or otherwise, in furtherance of a security purpose.
  • In general, a person that has obtained a biometric identifier from an individual and enrolled that identifier may not sell, lease or otherwise disclose the identifier absent consent. There are, of course, some exceptions, such as the disclosure being necessary to provide a product requested by the individual. In addition, a person generally may not use or disclose a biometric identifier for a purpose that is materially inconsistent with the terms under which the identifier was originally provided.
  • Persons that possess biometric identifiers of individuals that have been enrolled for a commercial purpose must (i) have reasonable safeguards to protect against unauthorized access to or acquisition of the identifiers, and (ii) not retain the identifiers for longer than is necessary to carry out certain functions, such as providing the product for which the identifier was acquired.
  • There is no private right of action under the new Washington law. It is to be enforced by the state’s Attorney General. Remember that Illinois’ BIPA does permit persons to sue for violations of that law.

To understand how the law applies, one needs to review the defined terms. For example, the term “biometric identifiers” means:

data generated by automatic measurements of an individual’s biological characteristics, such as a fingerprint, voiceprint, eye retinas, irises, or other unique biological patterns or characteristics that is used to identify a specific individual. “Biometric identifier” does not include a physical or digital photograph, video or audio recording or data generated therefrom, or information collected, used, or stored for health care treatment, payment, or operations under the federal health insurance portability and accountability act of 1996.

The law also defines “commercial purpose” to mean:

a purpose in furtherance of the sale or disclosure to a third party of a biometric identifier for the purpose of marketing of goods or services when such goods or services are unrelated to the initial transaction in which a person first gains possession of an individual’s biometric identifier.

And, the term “enroll” means

to capture a biometric identifier of an individual, convert it into a reference template that cannot be reconstructed into the original output image, and store it in a database that matches the biometric identifier to a specific individual.
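To make the three steps of that definition concrete, here is a minimal, hypothetical sketch in Python. It is not a real biometric system: it uses a one-way hash as a stand-in for feature extraction (real systems derive feature templates and perform fuzzy matching on imperfect captures), but it illustrates the statutory sequence of capture, conversion to a non-reconstructable reference template, and storage in a database matched to a specific individual.

```python
import hashlib

def make_template(raw_capture: bytes) -> str:
    # Stand-in for feature extraction. The key property mirrors the
    # statute: the template is one-way and cannot be reconstructed
    # into the original capture.
    return hashlib.sha256(raw_capture).hexdigest()

def enroll(db: dict, person_id: str, raw_capture: bytes) -> None:
    # "Enroll": convert the capture to a reference template and store
    # it in a database keyed to a specific individual.
    db[person_id] = make_template(raw_capture)

def matches(db: dict, person_id: str, raw_capture: bytes) -> bool:
    # Match a new capture against the stored template. (Real matching
    # is probabilistic; exact equality is a toy simplification.)
    return db.get(person_id) == make_template(raw_capture)

db: dict = {}
enroll(db, "employee-42", b"fingerprint-scan-bytes")
print(matches(db, "employee-42", b"fingerprint-scan-bytes"))  # True
print(matches(db, "employee-42", b"different-scan-bytes"))    # False
```

Note that the non-reconstructability requirement is doing legal work here: a database of reversible images would fall outside the definition of "enroll" but squarely within the raw "biometric identifier" category.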

The use of biometrics and biometric identifiers in commercial transactions and for other purposes is growing, and so is the number of state laws intending to protect that kind of data. Businesses that use or disclose biometrics in carrying out their business should carefully consider whether this new state law applies and, if so, what they need to do to comply.

Capturing the time employees work can be a difficult business. In addition to the complexity involved with accurately tracking arrival times, lunch breaks, overtime, etc. across a range of federal and state laws (check out our Wage and Hour colleagues who keep up on all of these issues), many employers worry about “buddy punching” or other situations when time entered into their time management system is entered by a person other than the employee to whom the time relates. To address that worry, some companies have implemented biometric tools to validate time entries. A simple scan of an individual’s fingerprint, for example, can validate that the individual is the employee whose time is being entered. But that simple scan can come with some significant compliance obligations, as well as exposure to litigation as discussed in a recent Chicago Tribune article.

The use of biometric data still seems somewhat futuristic and high-tech, but the technology has been around for a while, and there are already a number of state laws addressing the collection, use, and safeguarding of biometric information. We’ve discussed some of those here, including the Illinois Biometric Information Privacy Act (BIPA), which is the subject of the litigation referenced above. Notably, the Illinois law permits individuals to sue for violations and, if successful, to recover liquidated damages of $1,000 or actual damages, whichever is greater, along with attorneys’ fees and expert witness fees. The liquidated damages amount increases to $5,000 if the violation is intentional or reckless.

For businesses that want to deploy this technology, whether for time management, physical security, validating transactions or other purposes, there are a number of things to be considered. Here are just a few:

  • Is the company really capturing biometric information as defined under the applicable law? New York Labor Law Section 201-a generally prohibits the fingerprinting of employees by private employers. However, a biometric time management system may not actually be capturing a “fingerprint.” According to an opinion letter issued by the State’s Department of Labor on April 22, 2010, a device that measures the geometry of the hand is permissible as long as it does not scan the surface details of the hand and fingers in a manner similar or comparable to a fingerprint. But, under BIPA, this distinction may not work in some cases. “Biometric information” means any information, regardless of how it is captured, converted, stored, or shared, based on an individual’s biometric identifier used to identify an individual, such as a fingerprint. As a federal district court explained: The affirmative definition of “biometric information” does important work for [BIPA]; without it, private entities could evade (or at least arguably could evade) [BIPA]’s restrictions by converting a person’s biometric identifier into some other piece of information, like a mathematical representation or, even simpler, a unique number assigned to a person’s biometric identifier. So whatever a private entity does in manipulating a biometric identifier into a piece of information, the resulting information is still covered by [BIPA] if that information can be used to identify the person.
  • How long should biometric information be retained? A good rule of thumb – avoid keeping personal information for longer than is needed. The Illinois statute referenced above codifies this rule. Under that law, biometric identifiers and biometric information must be permanently destroyed when the initial purpose for collecting or obtaining such identifiers or information has been satisfied or within 3 years of the individual’s last interaction with the entity collecting it, whichever occurs first.
  • How should biometric information be accessed, stored and safeguarded? Before collecting biometric data, companies may need to provide notice and obtain written consent from the individual. This is the case in Illinois. As with other personal data, if it is accessible to or stored by a third party services provider, the company should obtain written assurances from its vendors concerning such things as minimum safeguards, record retention, and breach response.
  • Is the company ready to handle a breach of biometric data? Currently, 48 states have passed laws requiring notification of a breach of “personal information.” Under those laws, the definitions of personal information vary, and they are not limited to Social Security numbers. A number of them, such as those in Connecticut, Illinois, Iowa, and Nebraska, include biometric information. Accordingly, companies should include biometric data as part of their written incident response plans.

The use of biometrics is no longer something only seen in science fiction movies or police dramas on television. It is entering mainstream, including the workplace and the marketplace. Businesses need to be prepared.

Fingerprints, voice prints and vein patterns in a person’s palm are three examples of biometrics that may be “moving into the consumer mainstream to unlock laptops and smartphones, or as a supplement to passwords at banks, hospitals and libraries,” reports Anne Eisenberg at the New York Times. Of course, these technologies, aimed at increasing security and, to a lesser degree, convenience, raise data privacy concerns and other risks. However effective, convenient, and efficient these technologies may be, companies need to think carefully through their adoption and implementation, particularly in the workplace.

Below are just a few of the kinds of questions companies should be asking before implementing technologies that involve capturing biometric information.  It is likely that such technologies will go mainstream and, if so, spawn new laws regulating the use of biometric information. Thus, companies using such technologies will need to continue to monitor the legal landscape to manage their risks.

Can we collect this information? In some cases, the answer may be no. For example, in New York, Labor Law Section 201-a prohibits the fingerprinting of employees by private employers, unless required by law. However, according to an opinion letter issued by the State’s Department of Labor on April 22, 2010, a device that measures the geometry of the hand is permissible as long as it does not scan the surface details of the hand and fingers in a manner similar or comparable to a fingerprint. Other states may permit the collection of biometric information provided certain steps are taken. The Illinois Biometric Information Privacy Act, for instance, prohibits private entities from obtaining a person’s or customer’s biometric identifier or biometric information unless the person is informed in writing and consents in writing.

If we can collect it, do we have to safeguard it?  Regardless of whether a statute requires a business to safeguard such information, we believe it is good practice to do so. Moreover, states such as Illinois (see above) already require a reasonable standard of care when storing, transmitting or disclosing biometric information.

Is there a notification obligation if unauthorized persons get access to biometric information? In some states the answer is yes.  The breach notification statutes in states such as Michigan include biometric data in the definition of personal information. See MCLS § 445.72.

Are there any requirements for disposing of this information? Yes, a number of states (e.g., Colorado and Massachusetts) require that certain entities meet minimum standards for properly disposing records containing biometric information.

Can employees claim this technology amounts to some form of discrimination? In addition to securing devices and accounts, biometric technologies also are being used to track employee time and attendance in order to enhance workforce management. These different applications can form the basis of discrimination claims. For example, earlier in 2013, the U.S. Equal Employment Opportunity Commission (EEOC) claimed an employer’s use of a biometric hand scanner to track employee time and attendance violated federal law by failing to accommodate certain religious beliefs which opposed the use of such devices.

Retinal scan technology is another biometric technology that can be used for identification/security purposes.  However, as explained in a recent Biometric.com article, “examining the eyes using retinal scanning can aid in diagnosing chronic health conditions such as congestive heart failure and atherosclerosis…[as well as] diseases such as AIDS, syphilis, malaria, chicken pox and Lyme disease [and] hereditary diseases, such as leukemia, lymphoma, and sickle cell anemia.” Thus, the data captured by such scans can inform employers about the health conditions of their employees, raising a range of medical privacy, medical inquiry and discrimination issues under federal and state laws, such as the Americans with Disabilities Act. 

In recent years, many organizations have installed dashcams in their vehicles to improve safety and compliance, reduce costs, and better understand what’s happening in the field.  Dashcams can be extremely useful for these purposes, giving organizations visibility into risky driver behaviors and misuse of company property.  They can also lower insurance costs and provide valuable evidence in litigation.  To provide these benefits, though, dashcams collect a lot of data, including data organizations didn’t intend to collect or data that triggers legal obligations they didn’t intend to assume.

Why Organizations Are Using Dashcams

Dashcams serve a number of functions.  For example:

  1. Their use can lower insurance costs.  The video and audio recordings dashcams collect can help favorably resolve disputes, and their AI-powered driver behavior monitoring capabilities can help flag risky activity before it results in costly incidents. 
  2. Dashcams can also help organizations monitor compliance with internal policies (e.g., no phone use while driving) and external requirements (e.g., hours-of-service rules in regulated industries).  They also create a record that can be useful in audits or investigations.
  3. When accidents occur, dashcam footage can help clarify fault, rebut inaccurate claims, and, in some cases, prevent litigation altogether or significantly reduce exposure.
  4. Many dashcams now incorporate AI tools that evaluate driver behavior and generate performance scores.  For some organizations, this information influences coaching, discipline, promotion, and compensation decisions. 

The Risks Dashcams Pose

To deliver these benefits, dashcams collect and process significant volumes of data, the management of which can be challenging.  For instance:

  1. In certain jurisdictions, prior consent is required to audio record communications.  Organizations that deploy dashcams without a clear process for obtaining and documenting consent may find themselves out of compliance.
  2. Some dashcams use facial recognition or similar technologies to identify drivers or monitor attentiveness.  Collection of this data can trigger notice and consent obligations—e.g., in California, Colorado, Illinois, and Texas—as well as obligations to maintain reasonable safeguards to protect the data from unauthorized access or acquisition.
  3. Dashcams capture extraneous information, such as employees’ discussions about medical conditions, religious beliefs, sexual orientation, or legal off-duty activities (like drinking or gambling), or the fact that, while using the vehicle, they visited their doctor or attended their AA meeting.  Collection of this information can complicate employment decisions—e.g., by imputing to an employer knowledge of an employee’s protected characteristics—and heighten the risk of invasion of privacy claims.
  4. Dashcams increasingly use AI to evaluate driver behavior or generate performance metrics.  In certain jurisdictions (e.g., California, Colorado, Illinois, New York City), the use of AI-generated performance data may trigger notification, risk assessment, and other compliance requirements. 
  5. Dashcams are typically deployed and managed by third-party vendors, which means the data they collect is often processed outside the employer’s information systems.  Nevertheless, the employer remains responsible for the protection and proper handling of that data.  If the vendor experiences a breach, or misuses the data, impacted employees and/or regulators will likely seek to hold the employer—not just the vendor—accountable.   

How To Manage Dashcam Risk

For many organizations, dashcams are a major value add.  And the good news is that the risks their use presents—though significant—are manageable, provided you have a solid program in place to manage them.

Below are some practical steps to consider:

Inventory Your Technology

  • Identify what dashcams are in use across the organization
  • Understand what features are enabled (e.g., video, audio, AI, facial recognition, geolocation tracking, etc.)
  • Confirm the approved use cases

Map the Data

  • What data is being collected?
  • Where is it stored (including vendor environments)?
  • Who has access to it, both internally and externally?
  • How long is it retained?
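To make the mapping exercise concrete, the questions above can be captured in a simple inventory record per data category. This is only an illustrative sketch; the field names, categories, and the 90-day policy maximum are hypothetical, not drawn from any particular vendor or statute.

```python
from dataclasses import dataclass, field

# Hypothetical record answering the mapping questions for one category of
# dashcam data: what is collected, where it lives, who can access it, and
# how long it is kept. All values below are illustrative placeholders.
@dataclass
class DashcamDataRecord:
    category: str                      # e.g., "video", "audio", "geolocation"
    storage_location: str              # e.g., "vendor cloud (US-East)"
    internal_access: list = field(default_factory=list)
    external_access: list = field(default_factory=list)
    retention_days: int = 0

inventory = [
    DashcamDataRecord("video", "vendor cloud", ["fleet manager"], ["Vendor X"], 90),
    DashcamDataRecord("audio", "vendor cloud", ["legal"], ["Vendor X"], 30),
]

# Flag any category retained longer than a chosen policy maximum.
POLICY_MAX_DAYS = 90  # assumed internal policy ceiling, not a legal requirement
over_retained = [r.category for r in inventory if r.retention_days > POLICY_MAX_DAYS]
print(over_retained)  # → []
```

Keeping the inventory in a structured, queryable form (rather than a static memo) makes it easier to spot over-retention and access creep as features or vendors change.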

Address Notice and Consent Requirements

  • Implement clear notice to drivers and passengers
  • Obtain consent where required (e.g., before recording audio or collecting biometric data)

Review AI Use

  • Determine whether AI is being used to evaluate employees
  • Assess whether applicable AI laws impose additional obligations
  • Confirm that outputs are being used appropriately in employment decisions

Update Policies and Training

  • Develop or revise policies addressing dashcam use
  • Train employees on what is being collected and why
  • Provide guidance on appropriate use of company vehicles and equipment

Minimize Data Collection and Retention

  • Disable unnecessary features (e.g., audio, facial recognition) where possible
  • Limit retention periods to what is actually needed
  • Avoid collecting data “just in case it’s useful at some point”
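A retention limit is only meaningful if something enforces it. The sketch below shows one minimal way a periodic purge check might be structured; the per-category retention windows and the “keep and escalate” default for unknown categories are assumptions, not requirements from any specific law or product.

```python
import datetime

# Illustrative per-category retention windows, in days. The numbers are
# hypothetical; actual windows should reflect legal and business needs.
RETENTION = {"video": 90, "audio": 30}

def is_expired(category: str, recorded_on: datetime.date,
               today: datetime.date) -> bool:
    """Return True if a recording has outlived its retention window."""
    limit = RETENTION.get(category)
    if limit is None:
        # Unknown category: keep the data and escalate for human review
        # rather than silently deleting or silently retaining forever.
        return False
    return (today - recorded_on).days > limit

today = datetime.date(2024, 6, 1)
print(is_expired("audio", datetime.date(2024, 4, 1), today))  # → True (61 days old)
print(is_expired("video", datetime.date(2024, 4, 1), today))  # → False
```

Running a check like this on a schedule, and logging what was deleted and why, both limits the data on hand and creates a record that the retention policy is actually followed.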

Manage Vendor Risk

  • Conduct diligence on dashcam vendors’ privacy and security practices
  • Confirm where and how data is stored, processed, and transmitted
  • Understand whether the vendor uses data for product improvement, AI training, or other secondary purposes
  • Put clear contractual restrictions in place governing data use, retention, disclosure, breach notification, and risk allocation
  • Require appropriate security controls (e.g., encryption, access controls, incident response obligations)
  • Periodically reassess vendors to confirm ongoing compliance