Earlier this month, the Federal Trade Commission (“FTC”) issued a report discussing “big data.” The report compiles what the agency has learned from recent seminars and research, including a public workshop held on September 15, 2014. Known best for its role as the federal government’s consumer protection watchdog, the FTC highlights in the report a number of concerns about uses of big data and the potential harms to consumers. However, while the report focuses on commercial uses of big data involving consumer data, it also describes a number of issues raised when big data is employed in the workplace.

Used in the human resources context, big data has many useful applications, such as helping companies better select and manage applicants and employees. The FTC’s report describes a study showing that “people who fill out online job applications using browsers that did not come with the computer . . . but had to be deliberately installed (like Firefox or Google’s Chrome) perform better and change jobs less often.” Applying this correlation in the hiring process can result in an employer rejecting candidates not because of job-related factors, but because they use a particular browser. Whether this would produce the best results for the company is unclear.

Likely spurred at least in part by comments made by EEOC counsel at the FTC’s big data workshop in September 2014, the FTC’s report summarizes the potential ways that using “big data” tools can violate existing employment laws, such as Title VII of the Civil Rights Act of 1964, the Age Discrimination in Employment Act, the Americans with Disabilities Act and the Genetic Information Nondiscrimination Act. The report also includes a brief discussion of “disparate treatment” and “disparate impact” theories, concepts familiar to many employers.

According to the report, facially neutral policies or practices that have a disproportionate adverse effect or impact on a protected class create a disparate impact, unless those policies or practices further a legitimate business need that cannot reasonably be achieved by means that are less disparate in their impact. Consider the browser example above. Use of a particular browser seems facially neutral, but some might argue that selection results based on that correlation can have a disparate impact on certain protected classes. Of course, as the FTC report notes with regard to other uses of big data, a fact-specific analysis will be necessary to determine whether a practice causes a disparate impact that violates the law.
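To make the disparate impact concept more concrete, here is a minimal sketch of the kind of screen an employer might run on its own selection data. It applies the EEOC’s “four-fifths” rule of thumb, under which a selection rate for one group that is less than 80 percent of the highest group’s rate can signal possible adverse impact. The applicant counts are invented for illustration; this is not a tool described in the FTC report.

```python
# Hypothetical illustration of the EEOC "four-fifths" rule of thumb.
# The selection counts below are invented for demonstration only.

applicants = {
    # group: (number selected, number who applied)
    "Group A": (48, 100),
    "Group B": (30, 100),
}

# Each group's selection rate.
rates = {g: sel / total for g, (sel, total) in applicants.items()}
highest = max(rates.values())

for group, rate in rates.items():
    ratio = rate / highest
    flag = "possible disparate impact - review" if ratio < 0.8 else "ok"
    print(f"{group}: selected {rate:.0%}, {ratio:.0%} of highest rate -> {flag}")
```

A statistical screen like this is only a starting point; as the report emphasizes, a fact-specific legal analysis is still required.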

Two other concerns discussed in the FTC’s report that have workplace implications include:

  • Biases in the underlying data. Big data is about the collection, compilation and analysis of massive amounts of data. If hidden biases exist at these stages of the process, “then some statistical relationships revealed by that data could perpetuate those biases.” Yes, this means “garbage in, garbage out.” The report provides a helpful example: if a company’s big data algorithm considers only applicants from “top tier” colleges when making hiring decisions, the company may be incorporating previous biases in college admissions. Thus, it is critical to understand existing biases in data, as they could undermine the usefulness of the end results (a simple illustration follows this list).
  • Unexpectedly learning sensitive information. Employers using big data can inadvertently come into possession of sensitive personal information. The report describes a study which combined data on Facebook “Likes” and limited survey information to determine that researchers could accurately predict a male user’s sexual orientation 88 percent of the time, a user’s ethnic origin 95 percent of the time, and whether a user was Christian or Muslim 82 percent of the time. Access to this information could expose an employer to claims that its hiring decisions were based on that information, and not on other, legitimate factors.
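To illustrate the “garbage in, garbage out” point, the sketch below uses invented data to show how a screen built on a biased proxy (here, “top tier” college attendance) can skew outcomes by group even though group membership is never consulted. Everything in it is hypothetical; it is not drawn from the report or any real study.

```python
# Hypothetical "garbage in, garbage out" demonstration: a facially
# neutral screen built on a biased proxy reproduces the bias.
import random

random.seed(0)

candidates = []
for _ in range(1000):
    group = random.choice(["A", "B"])
    # Invented assumption: historical admission rates to "top tier"
    # schools differed by group for reasons unrelated to job skill.
    top_tier = random.random() < (0.6 if group == "A" else 0.2)
    candidates.append((group, top_tier))

# The "big data" hiring rule: keep top-tier applicants only.
selected = [c for c in candidates if c[1]]

for g in ("A", "B"):
    total = sum(1 for grp, _ in candidates if grp == g)
    kept = sum(1 for grp, _ in selected if grp == g)
    print(f"Group {g}: {kept}/{total} selected ({kept / total:.0%})")

# The selection rates differ sharply by group even though the rule
# never looks at group membership directly.
```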

Companies can maximize the benefits and minimize the risks of big data, according to the FTC report, by asking the following questions:

  • How representative is your data set?
  • Does your data model account for biases?
  • How accurate are your predictions based on big data?
  • Does your reliance on big data raise ethical or fairness concerns?

There certainly is much to consider before using big data technology in the workplace or for commercial purposes. As big data applications become more widespread and cost efficient, employers may feel the need to adopt them to remain competitive. They will need to proceed cautiously, however, and understand the technology, the data collected and whether the correlations work, and work ethically.

You’ve spent extensive time and effort, not to mention money, establishing your company’s reputation only to have the company defamed or disparaged anonymously online. This is a scenario which many organizations face in today’s virtual marketplace. As a recent decision by the Delaware Superior Court illustrates, dealing with these types of issues is often difficult and complicated, especially from a legal perspective.

Late last year, the Delaware Superior Court denied SunEnergy1’s efforts to identify an anonymous poster who allegedly made defamatory comments about SunEnergy1’s business on the website Glassdoor.com. Specifically, SunEnergy1 and two individuals (Plaintiffs) filed a defamation lawsuit in North Carolina Business Court against a former employee and Chief Financial Officer, Jeffery Brown (Defendant). The suit was filed in February 2014 after statements, allegedly made by Brown, were posted on Glassdoor.com on December 15, 2013. The anonymous posting was titled, “This is a terrible place to work” and made a number of unflattering statements about the work environment at SunEnergy1, including labeling the company’s culture as one of “oppression, untruths, and bullying.”

Glassdoor.com is one of several websites where job-seekers can post resumes and employers can advertise career openings. Glassdoor.com describes itself, as its name implies, as a “free jobs and career community that offers the world an inside look at jobs and companies.” Glassdoor.com also utilizes a rating system based on user input and offers a form where users can post reviews about listed companies.

After filing suit, Plaintiffs served an out-of-state subpoena on Glassdoor in Delaware and ultimately filed a Delaware action to compel Glassdoor to identify the anonymous poster’s Internet Protocol (IP) address. In response, Glassdoor filed a motion to quash, arguing the Plaintiffs’ subpoena was overbroad, unduly burdensome, and infringed upon the anonymous user’s First Amendment right to freedom of speech.

The Delaware Commissioner found that, although the subpoena arose from a North Carolina lawsuit, Delaware’s standard for overcoming online anonymity is the correct source of law because the information was being sought from a Delaware company. The Court stated the right to discover the identity of an anonymous author alleged to have made defamatory statements must be balanced against the author’s First Amendment right to free speech and to remain anonymous. In the precedential case Doe v. Cahill, the Delaware Supreme Court held that “[courts] must adopt a standard that appropriately balances one person’s right to speak anonymously against another person’s right to protect his reputation.” In short, under Delaware law, a party seeking to identify an anonymous speaker must make a showing that a civil wrong has been committed.

Applying Delaware law, the Court stated it needed to decide whether any “reasonable person” could have interpreted the statements in the December 15, 2013 review “as being anything other than an opinion.” In making its determination, the Court looked closely at the nature of Glassdoor.com and found it is a website for employment and company evaluation, not a news website where there is an expectation of objective reporting and journalistic standards. Similarly, the Court stated it is not a website where a person would go to find detailed factual information about a company, such as earnings reports and SEC filings. Rather, the Court found it “quite evident” that Glassdoor.com is a website where people go to express their personal opinions about having worked for a company—not a website where a reasonable person would go looking for objective facts and information about a company. The Court went on to say that it was “readily apparent that the author of this review is unhappy about his or her time at SunEnergy1 and has the proverbial axe to grind—no reasonable person would think otherwise. The fact that the author is a ‘former employee’ who wished to remain anonymous only cements this conclusion.”

In denying Plaintiffs’ motion to compel identification of the anonymous user, and granting Glassdoor’s motion to quash, the Court found that even when viewed in the light most favorable to Plaintiffs, the content of the review was not defamation and was instead nothing more than a rant by a former employee, citing anecdotal evidence, about why he or she thinks it is a terrible place to work.

Unfortunately for employers, or organizations which are similarly disparaged, the Court did not consider the potential harm anonymous posts like those at issue here could have on an organization’s reputation. In fact, “opinions” or insights from former employees are exactly why many users frequent such sites.

As the year draws to a close, employer claims under the Computer Fraud and Abuse Act (“CFAA”) against departing employees for stealing or otherwise diverting employer information without authorization to do so are dying slow deaths in many federal courts across the nation. As noted over on the Non-Compete and Trade Secrets Report, the U.S. federal circuits are split regarding whether an employee acts “without authorization” under CFAA when he or she steals employer confidential data at or near termination. The Second, Ninth and Fourth Circuits hold that as long as the employee was permitted to be on a computer for any purpose, diversion of employer information is “authorized” under CFAA. In contrast, the First, Fifth, Seventh, and Eleventh Circuits have adopted a broad construction, allowing CFAA claims alleging an employee misused employer information that he or she was otherwise permitted to access.

Now, in North Carolina at least, employers may have better luck fighting malevolent employees under the North Carolina statutory corollary to the CFAA. In Spirax Sarco, Inc. v. SSI Eng’g, the Eastern District of North Carolina put teeth into the North Carolina Computer Trespass Act (“NC Computer Trespass Act”), giving employers a new weapon in the fight against trade secret and confidential information misappropriation by departing employees. The NC Computer Trespass Act, N.C. Gen. Stat. § 14-458, provides, in relevant part:

(a) . . . [I]t shall be unlawful for any person to use a computer or computer network without authority and with the intent to do any of the following:

(1) Temporarily or permanently remove, halt, or otherwise disable any computer data, computer programs, or computer software from a computer or a computer network. . . .

(3) Alter or erase any computer data, computer program or computer software. . . . [or]

(5) Make or cause to be made an unauthorized copy, in any form, including, but not limited to, any printed or electronic form of computer data, computer programs, or computer software residing in, communicated by, or produced by computer or computer network.

Unlike the CFAA, the NC Computer Trespass Act defines “without authority” clearly. An employee acts “without authority” when either the employee has no right or permission to use a computer, or the employee uses a computer in a manner exceeding the right or permission given by the employer. The United States District Court for the Eastern District of North Carolina held that a departing employee who intentionally used his employer-issued laptop to download vast quantities of computer files to his own media devices and Dropbox account was acting “without authority” under the NC Computer Trespass Act. The Court also noted that the former employee deleted vast quantities of computer files from the Spirax-issued laptop without authorization to do so.

Spirax provides employers with employees in North Carolina a new tool for protecting corporate information, without the need to wade into the murky waters of the CFAA.

Can we prohibit employees from making audio recordings at work?  As technology continues to advance, and it becomes easier and easier for employees to surreptitiously record conversations, this question is posed by many employers.  In fact, we discussed this very question back in 2013.  Unfortunately, the answer is perhaps the most often used attorney response – “Maybe.”  This is especially true given the recent decision from the National Labor Relations Board (NLRB) in Whole Foods Market, Inc. and United Food and Commercial Workers, Local 919 and Workers Organizing Committee of Chicago.  For employers, or those looking to prohibit the use of recording devices, the NLRB’s decision, issued on December 24, 2015, is more akin to coal than an early Christmas present.

This matter was before the NLRB after the NLRB’s General Counsel filed exceptions to the decision of Administrative Law Judge Steven Davis.  That decision, issued on October 30, 2013, was previously discussed by our labor colleagues.  In his decision, ALJ Davis found that the company’s nationwide policy banning employee recording of workplace “conversations” was lawful.  The policy’s stated purpose was “to eliminate a chilling effect… when one person is concerned that his or her conversation with another is being secretly recorded.”  The prohibition otherwise complements the company’s well-established and pro-active open-door policy.  The ALJ found the company has a legitimate business interest in promoting a culture encouraging employees to “speak up and speak out.”

In his exceptions to the ALJ’s decision, the NLRB’s General Counsel asserted that recording conversations in the workplace is a protected right and that employees would reasonably interpret the rules to prohibit their use of cameras or recording devices in the workplace for employees’ mutual aid and protection.

The NLRB found, contrary to the ALJ, that the rules at issue would reasonably be construed by employees to prohibit Section 7 activity.  The NLRB went on to say that photography and audio or video recording in the workplace are protected by Section 7 if employees are acting in concert for their mutual aid and protection and no overriding employer interest is present.  Specifically, the NLRB stated that such protected conduct may include, for example, recording images of protected picketing, documenting unsafe workplace equipment or hazardous working conditions, documenting or publicizing discussions about terms and conditions of employment, documenting inconsistent application of employer rules, or recording evidence to preserve it for later use in administrative or judicial forums in employment-related actions.

Importantly, the decision does state that the NLRB is not making any findings as to whether particular recordings are concerted, and is not finding that recording necessarily constitutes concerted activity.  Similarly, the NLRB stated it is not holding that all rules regulating recording are invalid.  Rather, the NLRB clarified that it found only that recording may, under certain circumstances, constitute protected concerted activity under Section 7 (the dreaded “Maybe”) and that the rules at issue in this matter, which would reasonably be read by employees to prohibit protected concerted recording, violate the National Labor Relations Act.

While mentioned in a footnote to the decision, it is important to note that some states (generally in statutes addressing wiretapping) require all parties to a conversation to consent before that conversation may be recorded.  To overcome these statutory prohibitions on surreptitious recording, the NLRB focused on the broad application of these recording rules to all jurisdictions where the Respondent has locations.  It is unclear whether the NLRB’s decision would have been different if the rules were limited to those states where nonconsensual recording is unlawful.

This decision, along with others by the NLRB and state and federal courts, highlights the difficulties employers face when attempting to prohibit recording or the use of recording devices.  As such, employers interested in implementing workplace rules or policies regarding recording are urged to consider existing legal precedent on this issue, set forth specific legitimate business interests for the prohibition, and consult with counsel before development and implementation.

Earlier this year, we reported that the Internal Revenue Service clarified that it would not consider the value of credit monitoring and other identity protection services provided by employers to employees in connection with a data breach to be taxable income to the employees. IRS Announcement 2015-22. In response to comments, the IRS expanded this tax treatment to apply when employers provide such services before a breach happens. IRS Announcement 2016-02.

In the more recent Announcement, the IRS concludes:

Accordingly, the IRS will not assert that an individual must include in gross income the value of identity protection services provided by the individual’s employer or by another organization to which the individual provided personal information (for example, name, social security number, or banking or credit account numbers). Additionally, the IRS will not assert that an employer providing identity protection services to its employees must include the value of the identity protection services in the employees’ gross income and wages. The IRS also will not assert that these amounts must be reported on an information return (such as Form W-2 or Form 1099-MISC) filed with respect to such individuals. Any further guidance on the taxability of these benefits will be applied prospectively.

This is welcome news for employers looking for ways to help their employees avoid being affected by a data breach, and to mitigate the effects should employees become victims of a breach. The employer can provide the services without increasing its federal payroll taxes, and employees can receive the services without incurring any additional federal tax liability. Employers and employees will still have to consider any potential state and local tax implications, and should confer with their tax advisors accordingly.

The Announcement states, however, that it does not apply to cash received in lieu of identity protection services, or to proceeds received under an identity theft insurance policy. Thus, for example, the tax treatment of proceeds received under an identity theft insurance policy continues to be governed by existing law.

As a result of this action, and because of how prevalent data breaches have become, it is likely that more employers will be looking to provide data breach monitoring and related services to their employees. While these services would not constitute benefits covered under the Employee Retirement Income Security Act (ERISA), as with other employee benefits, employers will want to carefully select the vendors that will provide the services, and take other steps to incorporate this into their overall benefit offerings.

Are pundits discussing the personal information allegedly accessed by a campaign staffer for Bernie Sanders? No, not really, and that is the point.

Scheduled to debate tonight at St. Anselm College in Manchester, New Hampshire, Democratic presidential candidates Bernie Sanders and Hillary Clinton are almost certain to joust over an alleged intrusion into Clinton’s voter data by a Sanders campaign staffer. According to reports, the staffer accessed confidential voter data maintained by a vendor, NGP VAN, while the firewall protecting that data had been removed. (hmmm…a third party vendor) In response, the Democratic National Committee (DNC) terminated the Sanders campaign’s access to all voter data, including the campaign’s own data. Litigation followed, a deal was reached, but reverberations continue. Turn to your favorite cable news channel.

One hears “data breach” and immediately Social Security numbers, credit card data, or medical information come to mind. In this case, the personal information reported to be involved included names, addresses, ethnicity, and voting history, hardly considered to be sensitive personal information in the United States. In fact, none of the state data breach notification laws would require notification based solely on these data elements. (But see, e.g., FTC settlement involving email addresses). But, some of the information, particularly analytical data concerning voter preferences, can be tremendously helpful to a campaign. So it is easy to see why it is causing such a stir, particularly for the Sanders campaign.

Why is this important beyond presidential politics?

Organizations are beginning to recognize the need for data breach preparedness. This is good – we are seeing more internal teams being assembled, comprised of key stakeholders within the organization. These teams are meeting, learning and developing data breach response plans – including sample investigation checklists and policies, template notification letters, and vendor relationships – and engaging in tabletop exercises.

Their initial focus, however, is often exclusively on breaches involving personal information that would trigger notification obligations under federal (e.g., HIPAA) and state laws. The Sanders breach and others before it should make clear that these teams need to look beyond Social Security numbers and payment cards and account for data breaches that could initiate an entirely different set of concerns, exposures, considerations and mitigation steps.

If breached, an organization’s proprietary data, internal email communications among executives and management, customer or client data, sales information, and, as we are seeing, even voter data can have catastrophic consequences for the organization. A breach exposing insensitive email correspondence in the c-suite about customers, or suggesting systemic discriminatory employment practices, or outlining detailed labor management strategies can have significant implications for a company’s market position and workforce management. It can also trigger unwanted litigation and adversely impact the organization’s reputation. Putting data belonging to others at risk also could result in the loss of access to critical business information held by others, as in the Sanders breach. These are only a handful of examples; one need only think about the sensitive business information one’s own organization maintains or accesses that is not personal information to understand the effects of a breach of that information.

Organizations cannot prevent every unflattering email sent or received by members of their workforce, they cannot avoid collecting or accessing sensitive business information entirely, and they cannot prevent all data breaches from occurring. But they can take steps to be prepared in the event of a breach and, in doing so, should consider the full range of breaches they could encounter – those affecting personal information and those affecting other sensitive and critical business information.

On December 17, 2015, following four years of sometimes acrimonious debate, the EU Parliament and Council of the European Union informally agreed on the final draft of the General Data Protection Regulation (“GDPR”). The GDPR will replace what privacy experts refer to simply as “95/46” – the 1995 law known as the EU Data Protection Directive – once officially adopted by the Parliament and Council of the EU. It will go into effect two years from passage.

Multinational companies should use the next two years to begin aligning privacy policies and practices with the principles in the new regulation. Key elements of the GDPR include:

  • One Law/One Rule: Unlike 95/46, which was implemented through national laws enacted by individual EU member states, the GDPR applies directly to all EU member nations and is intended to create more consistency across the EU regarding data protection. A business that operates in more than one member state will now deal only with the Data Protection Authority (“DPA”) in the country where the business has its main establishment. This lead DPA will handle cross-border data transfers.
  • Broader Brush: The GDPR is expressly extraterritorial. It applies on its face to data controllers and processors outside the EU where their data processing activities affect EU residents. Also, the definition of “personal data” has been expanded to include information related to a data subject’s physical, physiological, genetic, mental, economic, cultural or social identity.
  • Consent Rules: Consent remains a valid basis upon which to process data, though likely not in the employment context. Under the GDPR, consent must be freely given, specific, informed and constitute an unambiguous indication of the data subject’s agreement to the processing of the data subject’s personal data.
  • Data Breach Notification: The GDPR establishes a uniform data breach notification requirement applicable to all data controllers. In the event of a data breach leading to the loss, access or disclosure of personal data, controllers must notify the appropriate DPA “without undue delay,” and, where feasible, within 72 hours. Like many US data breach notification laws, GDPR contains a notice exception where the data is encrypted or where it is unlikely the data subject will be harmed.
  • Required Data Protection Officers: The GDPR requires data controllers and processors to appoint a data protection officer (“DPO”) if the business’s “core activities” consist of regular and systematic data subject monitoring or the processing of sensitive personal data (relating to, e.g., health, ethnicity, trade union membership) or data relating to criminal convictions and offenses.
  • Rules on Data Transfer: Binding Corporate Rules are recognized as the “gold standard” for data transfer. Also, data transfer out of the EU will be allowed where the European Commission has issued a decision affirming the adequacy of the level of data protection in the country where the data is being transferred. DPAs will not have to approve EU Model Contract Clauses, which remain valid under the GDPR.
  • Sanctions: GDPR gives data subjects a private right of action in EU courts. Data subjects will have a right to money damages from either controllers or processors for harm caused by processing personal data. DPAs will have enforcement authority similar to US regulators. A European Data Protection Board will issue opinions, adopt binding decisions and otherwise oversee data protection processes.

When people think about data breaches, they tend to think more about the illegal hacking of computer networks by individuals, criminal enterprises or even nation states than they do about simple employee error. This makes some sense, as hacking incidents seem more interesting and draw more media attention. Holding this belief, however, can cause many to underestimate the risk of a breach: they assume they are not likely to be the target of a hack and miss altogether the risk of employee error. A recent report by the Wall Street Journal about a survey by the Association of Corporate Counsel may change this.

According to the survey, “employee error” turns out to be the most common reason for a data breach. An example of the kind of employee error mentioned in the survey – “accidentally sending an email with sensitive information to someone outside the company” – is something just about all of us have heard about or experienced in our own organizations.
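As a concrete (and deliberately simplified) illustration, here is a minimal sketch of the kind of outbound-email check that data loss prevention tools perform. The domain, pattern and addresses are invented for this example; real products are far more sophisticated.

```python
# Hypothetical outbound-email check: flag a message that appears to
# contain a Social Security number and is addressed outside the
# company. All names and patterns here are invented for illustration.
import re

COMPANY_DOMAIN = "example.com"  # assumed internal domain
SSN_PATTERN = re.compile(r"\b\d{3}-\d{2}-\d{4}\b")  # simple SSN-like pattern

def needs_review(recipients, body):
    """Return True if the message warrants review before sending."""
    external = [r for r in recipients
                if not r.lower().endswith("@" + COMPANY_DOMAIN)]
    return bool(external) and bool(SSN_PATTERN.search(body))

# Example: an SSN-like string addressed to an outside recipient.
print(needs_review(["vendor@other-company.com"],
                   "Per your request, the SSN is 123-45-6789."))  # True
```

Technical checks like this can catch some mistakes, but as discussed below, they are no substitute for understanding how employees actually handle information.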

So what does this mean? Well, for organizations that want to minimize the chance of a data breach, they may have to rethink their current strategies. This is particularly true in industries in which more employees are likely to have access to greater amounts of personal information – healthcare, insurance, retail, professional services, etc.

Addressing the risk of “employee error” is difficult…mistakes happen. But there are steps organizations can take to minimize the risk. Here are a few:

  • Understand the risks your workforce presents. Addressing data security in an organization often means focusing on its IT infrastructure, with less attention to how employees do their jobs, what information they have access to and why, and whether employees are sufficiently aware of best practices for safeguarding information, among other things. Firewalls, software updates and encryption all are important to a comprehensive information security program, but to address employee error organizations first must understand the roles employees play and the functions they carry out that involve personal information. It is not uncommon for employee mistakes to bypass the IT safeguards, resulting in a data breach.
  • Reevaluate the role of IT. In many organizations, data breach prevention is thought of solely as an IT function. In most cases, this is simply the wrong approach. Data security is an enterprise-wide concern, requiring other stakeholders to have a seat at the table when trying to understand and minimize these risks. Assessing the risks healthcare workers pose to patient data, for example, requires more than an understanding of the level of encryption on the network. Does the worker know when he or she is able to disclose PHI to a family member or other person? Are workers aware of, and do they follow, the minimum necessary rule? How are workers using their personal devices, working from home, etc.? The IT department is a necessary component for developing a data security plan, but its participation alone is not sufficient.
  • Training, training, training. Organizations and their employees are increasingly challenged by an expanding regulatory and compliance environment – data security is a part of that environment. The absence of adequate training not only can cause the organization to fall short of certain compliance mandates, but also is a missed opportunity to reduce data breach risk. Training ought to reach beyond how to set a good password and the policy on using flash drives. These are important, but training also should remind employees about basic steps they take in the course of their particular jobs which could trigger a significant breach if they are not careful – e.g., be careful when forwarding email with sensitive attachments, avoid clicking on links in emails, don’t leave boxes with sensitive data lying around, etc.

Obviously, more can be done to minimize the risk to personal data caused by employee error and those steps depend on a range of factors specific to each organization. However, organizations first have to recognize that employee error is a significant risk, and this requires thinking beyond IT-related risks.

The Internet of Things (IoT), as defined by Wikipedia, is the network of physical objects or “things” embedded with electronics, software, sensors, and network connectivity, which enables these objects to collect and exchange data. The IoT allows objects to be sensed and controlled remotely across existing network infrastructure, creating opportunities for more direct integration between the physical world and computer-based systems, and resulting in improved efficiency, accuracy and economic benefit.  Each thing is uniquely identifiable through its embedded computing system but is able to interoperate within the existing Internet infrastructure.

In short, if we look at the objects we use in everyday life – from our phones, to our laptops, to even our copy machines or printers at work – each is able to collect and potentially exchange vast amounts of data.  While the capabilities of these devices and objects to collect data and exchange data will likely improve our daily lives, it is also important to examine how to protect the privacy and security of the information and data which is collected and shared.

As we have previously discussed, the Fixing America’s Surface Transportation Act (FAST Act) includes a number of provisions related to privacy, including an amendment to the Gramm-Leach-Bliley Act (GLBA) as well as the enactment of the Driver Privacy Act of 2015.  Interestingly, the FAST Act also requires a report on the potential of the IoT to improve transportation services in rural, suburban, and urban areas.

Specifically, Section 3024 of Title III requires the Secretary of Transportation to submit a report to Congress not later than 180 days after December 4, 2015 (the enactment date of the FAST Act).  The report, presumably to address the issues discussed above, is to include (1) a survey of the communities, cities, and States that are using innovative transportation systems to meet the needs of aging populations; (2) best practices to protect privacy and security, as determined as a result of such survey; and (3) recommendations with respect to the potential of the IoT to assist local, State, and Federal planners to develop more efficient and accurate projections of transportation needs.

While it is unclear exactly what information will be captured in the report, it’s clear the drafters of Section 3024 have recognized the importance of data privacy and security while utilizing the IoT to improve transportation.  On a more personal note, I have to believe I am not alone in hoping that the report will finally address (and correct!) the traffic patterns related to my daily commute!

An increasing number of companies have been installing or otherwise using some of the latest monitoring technologies in vehicles driven by employees – whether those vehicles are owned by the company or the employee – usually for safety and/or logistics management. These technologies include “event data recorders” or EDRs that capture a range of information just prior to or during a crash event. Seeking to address privacy concerns for data collected on EDRs, the Driver Privacy Act of 2015 (“Act”) was enacted as part of the Fixing America’s Surface Transportation Act (H.R. 22), signed by President Obama on Friday, December 4, 2015. Companies that have vehicle monitoring programs should review this new law.

To what data does the law apply?

The law applies to any data retained by an EDR installed in a vehicle, and makes clear that the data belongs to the owner of the vehicle or, in the case of a leased vehicle, the lessee of the vehicle in which the event data recorder is installed. It does not matter when the vehicle was made. For purposes of this law, an EDR is defined in 49 CFR section 563.5 and generally means a device or function in a vehicle that records the vehicle’s dynamic time-series data during the time period just prior to or during a crash event, but does not include audio and video data. Installed in nearly all new cars, EDRs capture data elements such as speed, braking, use of a seat belt, and other information.

How does the law safeguard privacy?

The Act provides that data recorded or transmitted by an EDR may not be accessed by a person other than the vehicle’s owner or lessee. There are some exceptions:

  • as authorized by a court or judicial or administrative authority, subject to the standards for admission into evidence required by that court or other administrative authority;
  • if pursuant to written, electronic, or recorded audio consent of the vehicle owner or lessee;
  • to carry out certain investigations or inspections authorized by federal law, subject to limitations on the disclosure of personally identifiable information and the vehicle identification number;
  • to determine the need for, or facilitate, emergency medical response in response to a car accident;
  • for traffic safety research, so long as the personally identifiable information of the owner or lessee and the vehicle identification number is not disclosed.

Are there state laws that apply here as well?

Yes, a number of states already have laws addressing privacy concerns related to information collected on EDRs. The exceptions permitting access to this data vary from state to state, but many of those laws require the consent of the owner of the vehicle.

What effects will the Act have on employers?

Most monitoring programs apply to employees operating company-owned vehicles. In those cases, the employer owns or leases the vehicle and, as the owner or lessee, can itself consent to accessing the data captured by the EDR. Of course, employers may nonetheless want to inform employees of the monitoring activity, and there are special considerations concerning certain groups in the workforce, including employees represented by a union and those operating in other countries.

For those employers whose employees use vehicles that the employees own or lease, accessing EDR data will require the employees’ written, electronic, or recorded audio consent. Many employers are already doing this, particularly in states where this has been required for some time. However, the Act mandates this nationwide.