The Telephone Consumer Protection Act (“TCPA”) saw plenty of action in 2019, and in the final days of the year the Federal Communications Commission (“FCC”) issued a significant ruling concluding that “online fax services,” i.e., e-faxes, are outside the scope of the TCPA. The FCC’s ruling effectively forecloses the common “junk fax” class action lawsuits against companies that send out e-faxes, assuming those faxes are not delivered to a traditional fax machine.

In 2005, the TCPA, which restricts telephone solicitations and the use of automated telephone equipment, was amended by the Junk Fax Prevention Act (JFPA), which restricted the use of fax machines to deliver unsolicited advertising.

The FCC ruling stems from a 2017 petition by Amerifactors asking the FCC to clarify that faxes sent to “online fax services” are not faxes sent to “telephone facsimile machines”, and therefore do not violate the TCPA. An online fax service is “a cloud-based service consisting of a fax server or similar device that is used to send or receive documents, images and/or electronic files in digital format over telecommunications facilities” that allows users to “access ‘faxes’ the same way that they do email: by logging into a server over the Internet or by receiving a pdf attachment [as] an email.” At the time, Amerifactors was defending a class action suit on claims that it violated the TCPA, where the bulk of messages sent to consumers were from “online fax services.”

In finding that “online fax services” are not “telephone facsimile machines,” the FCC turned to the plain language of the TCPA. That language demonstrates that Congress did not intend the statute’s prohibition to apply to faxes sent to equipment other than a “telephone facsimile machine.” In addition, the FCC highlighted precedent dating back to 2003 holding that faxes “sent as email over the Internet” are not subject to the TCPA. Faxes sent by online fax services as an attachment that the consumer can delete without printing are effectively the same as “email sent over the Internet.”

Importantly, the FCC notes that faxes sent by online fax services do not lead to the “specific harms” to consumers Congress sought to address in the TCPA.

“The House Report on the TCPA makes clear that the facsimile provisions of the statute were intended to curb two specific harms: “First, [a fax advertisement] shifts some of the costs of advertising from the sender to the recipient. Second, it occupies the recipient’s facsimile machine so that it is unavailable for legitimate business messages while processing and printing the junk fax.” The record is clear that faxes sent to online fax services do not pose these harms and, in fact give consumers tools such as blocking capabilities to control these costs.”

This ruling is considered a win for businesses and will likely have a sweeping impact on litigation in this area. Stay tuned for more TCPA-related developments in the coming year.

It’s hard to overstate the range of issues the California Consumer Privacy Act (the “CCPA”) raises for covered businesses and their service providers. One of those issues involves the meaning of “consumer.” If you have been following CCPA developments, you know that at least for the first 12 months the CCPA is effective, the new law will, to a limited extent, apply to personal information of certain employees, applicants, and contractors. See AB 25.

But what about a covered business’s shareholders? Shareholders may not buy goods and services from the business, and they may not be employees of the business. However, some covered businesses, whether public or private, have shareholders who are natural persons residing in California, and from whom the business collects personal information. For example, businesses might collect personal information from shareholders through their investor relations websites, or the information might be collected on their behalf by third parties. Businesses subject to the CCPA should be considering what steps they need to take with respect to their shareholders or similarly-situated “consumers.”

In general, the CCPA defines “consumer” to mean a natural person who is a California resident. See Cal Civ. Code Sec. 1798.140(g). That definition would seem to include shareholders of the business who are natural persons residing in California. However, there is a question of whether, in their role as shareholders, they would fit under the changes made by AB 25.

In general, the changes made by AB 25 apply to personal information collected by a business about a natural person in the course of that person acting as a job applicant to, or an employee, owner, director, officer, medical staff member, or contractor of, that business, and only to the extent the person’s personal information is collected and used by the business solely within the context of that role or former role.

That is a mouthful, but if shareholders are “owners,” shouldn’t they be covered by AB 25? Not in all cases. For purposes of this section of the law, “owner” means a natural person who meets any of the following:

  1. Has ownership of, or the power to vote, more than 50 percent of the outstanding shares of any class of voting security of a business.
  2. Has control in any manner over the election of a majority of the directors or of individuals exercising similar functions.
  3. Has the power to exercise a controlling influence over the management of a company.

Shareholders without the ownership, control, or power noted above would not be considered “owners” for purposes of the changes made by AB 25. Additionally, for those shareholders, it does not appear that the “B2B” exception added under AB 1355 would apply. The relevant language in AB 1355 provides:

Personal information reflecting a written or verbal communication or a transaction between the business and the consumer, where the consumer is a natural person who is acting as an employee, owner, director, officer, or contractor of a company, partnership, sole proprietorship, nonprofit, or government agency and whose communications or transaction with the business occur solely within the context of the business conducting due diligence regarding, or providing or receiving a product or service to or from such company, partnership, sole proprietorship, nonprofit or government agency

Shareholders likely would not be engaged in this kind of activity in their role as shareholders.

Last week, the public comment period for the proposed regulations issued in October by Attorney General Xavier Becerra closed, and final regulations are expected shortly. Absent clarification by the Attorney General on whether CCPA obligations reach shareholders of a business, covered businesses should be considering shareholders as part of their compliance efforts.

In response to trends, heightened public awareness, and a string of large-scale data breaches, states continue to enhance their data breach notification laws. In 2017, Maryland amended its Personal Information Protection Act (PIPA), expanding the definition of personal information, modifying the definition of “breach of the security of the system,” establishing a 45-day timeframe for notification, and expanding the class of information subject to Maryland’s data destruction laws. Now, Maryland has again amended PIPA, with HB 1154 in effect from October 1, 2019, notably enhancing the requirements for a business once it becomes aware of a data security breach.

Under PIPA, prior to HB 1154, a business that owns or licenses personal information and that became aware of a data security breach was required to conduct a reasonable, prompt, and good faith investigation to determine the likelihood that personal information had been or would be misused as a result of the breach. The new amendment expands the meaning of covered businesses for this purpose to include all businesses that own, license, or maintain the personal information of Maryland residents.

That said, if a business that maintains the personal information incurs a breach, it is still the obligation of the business that owns or licenses that information to notify affected residents of the breach. Moreover, if the business that incurs the breach is not the owner or licensee of the personal information, the business may not charge the owner or licensee a fee for providing information that the owner or licensee needs to make a notification.

In addition, a business that owns or licenses personal information cannot use information related to the breach for any purpose other than for:

  • Providing notification of the breach;
  • Protecting or securing personal information; or
  • Providing notification to national information security organizations created for information sharing and analysis of security threats, to alert and avert new or expanded breaches.

New trends and events likely will continue to prompt legislatures to amend their data breach notification laws. Businesses should develop their incident response plans with flexibility as a key component to ensure compliance with the most current breach notification requirements.

Tax season soon will be upon us, and many not-so-eager taxpayers will share sensitive personal information about themselves, their dependents, their employees, and others with their trusted professional tax preparers for processing. What many of these preparers might not realize is that federal law and a growing number of state laws obligate them to have safeguards in place to protect sensitive taxpayer data. This can be overwhelming, especially considering tax preparers are already tasked with having to absorb annual federal, state, and local tax law changes, in addition to running their businesses. We hope this post provides a helpful summary of best practices and resources.

Legal Mandates.

  • Federal. The Financial Services Modernization Act of 1999 (a.k.a. Gramm-Leach-Bliley Act) authorized the Federal Trade Commission to set information safeguard requirements for various entities, including professional tax return preparers. The FTC’s Safeguards Rule requires tax return preparers to implement security plans, which should include:
    • designating one or more employees to coordinate an information security program;
    • identifying and assessing risks to client data, along with the effectiveness of current safeguards for controlling these risks;
    • maintaining a written information security program, which is regularly monitored and tested;
    • using vendors that also have appropriate safeguards, and contractually requiring them to maintain those safeguards; and
    • keeping the program up to date to reflect changes in business or operations, or the results of security testing and monitoring.
  • States. A growing number of states have enacted laws and/or issued regulations mandating businesses adopt reasonable safeguards to protect personal information. Small and mid-sized businesses typically are not excluded from these mandates. Some of these states include: California, Colorado, Florida, Illinois, Massachusetts, New York, and Oregon.

Practical next steps.

The good news is that businesses generally are permitted to shape their programs according to their size and complexity, the nature and scope of their activities, and the sensitivity of the customer information they handle. However, a small five-person tax preparation firm should not read this to mean it would be sufficient to obtain a template privacy policy from the Internet, put it on a shelf, and call it a day. Others have tried this.

The Internal Revenue Service (IRS) has issued guidance to help preparers get up to speed. The IRS’ “Taxes-Security-Together” Checklist lists six basic protections that everyone, especially tax professionals handling sensitive data, should deploy. These include:

  1. Anti-virus software
  2. Firewalls
  3. Two-factor authentication
  4. Backup software/services
  5. Drive encryption
  6. Virtual Private Network (VPN)

These six protections likely are not enough on their own; other controls include:

  • Train yourself and staff to spot and avoid phishing attacks.
  • Maintain strong passwords (NOT “password” or “123456”!) – generally 8 or more characters, mixing alphanumeric and special characters; consider passphrases.
  • Encrypt all sensitive files/emails.
  • Back up sensitive data to a secure external source that is NOT connected full-time to a network (if you have been hit with a ransomware attack, you will understand why this is important).
  • Double check return information, especially direct deposit information, prior to e-filing.
  • Only collect, use, retain, and disclose the minimum necessary information needed for the task.
  • Because no set of safeguards is perfect, have an incident response plan and practice it.
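The password guidance above can be sketched as a simple screening check. This is an illustrative sketch only; the function name and thresholds are ours, not an IRS or FTC requirement, and real policies should also screen candidates against breached-password lists.

```python
import re

def is_strong(password: str, min_length: int = 8) -> bool:
    """Rough screening check mirroring the guidance above: minimum
    length plus a mix of letters, digits, and special characters.
    Illustrative only; not an IRS or FTC standard."""
    if len(password) < min_length:
        return False
    has_letter = re.search(r"[A-Za-z]", password) is not None
    has_digit = re.search(r"\d", password) is not None
    has_special = re.search(r"[^A-Za-z0-9]", password) is not None
    return has_letter and has_digit and has_special

# The weak choices called out above fail; a longer mixed passphrase passes.
print(is_strong("password"))          # False
print(is_strong("123456"))            # False
print(is_strong("correct horse! 9"))  # True
```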

Check out IRS Publication 4557 Safeguarding Taxpayer Data for more information on these and other controls, and a helpful checklist from the FTC.

Yes, professional tax preparers that fail to take these steps can expose themselves to an FTC investigation and to claims that they violated their obligations as Authorized IRS e-file Providers under IRS Revenue Procedure 2007-40. But the impact on your business from a breach of client data can be far worse. The key is to get started and do something.

Several weeks ago, we published “CCPA FAQs on Cookies,” which provides a high-level look at how the impending CCPA may apply to website cookies. The CCPA’s definition of personal information is expansive, and in preparation for the CCPA it is easy to overlook certain elements of personal information, in particular website cookies.

A cookie is a small text file that a website places on a user’s computer (including smartphones, tablets or other connected devices) to store information about the user’s activity. Cookies have a variety of uses ranging from recognizing the user when they return to the website to providing the user with advertising targeted to their interests. Depending on their purpose, the website publisher or a third party may set the cookies and collect the information. These cookies may trigger certain data protection obligations.
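For readers less familiar with the mechanics, the exchange described above can be sketched with Python’s standard-library `http.cookies` module. The cookie name and value below are hypothetical, chosen only to illustrate the round trip.

```python
from http.cookies import SimpleCookie

# Hypothetical first-party session cookie a publisher's server might set.
cookie = SimpleCookie()
cookie["session_id"] = "abc123"
cookie["session_id"]["max-age"] = 3600   # expires after one hour
cookie["session_id"]["secure"] = True    # only sent over HTTPS

# The Set-Cookie header the browser stores and replays on later visits,
# which is how the site recognizes a returning user.
header = cookie["session_id"].OutputString()
print("Set-Cookie:", header)

# Parsing an incoming Cookie header works the same way in reverse.
incoming = SimpleCookie()
incoming.load("session_id=abc123")
print(incoming["session_id"].value)
```

Whether the value stored this way counts as personal information depends on what it identifies, which is why a cookie inventory matters for compliance.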

Recently, the Court of Justice of the European Union (CJEU), the EU’s high court, issued an important opinion addressing website cookie use, Bundesverband der Verbraucherzentralen v. Planet49 (C-673/17). The opinion reviewed EU law on the protection of electronic communications privacy and provides clarity on cookie consent requirements. The CJEU’s decision follows several EU developments regarding website cookies and online tracking. In July, the UK Information Commissioner’s Office published an extensive “Guidance on the use of cookies and similar technologies”; the Commission Nationale de l’Informatique et des Libertés (CNIL) of France published an updated version of its guidance on cookies; and the CJEU issued an informative opinion (Fashion ID GmbH & Co. KG v Verbraucherzentrale NRW eV (C-40/17)) on data protection issues surrounding the use of social media widgets. It is safe to say that these developments signal the importance of assessing your business’s website cookie usage practices.

Below are several key takeaways from the CJEU opinion on website cookies in Planet49:

  • Consent that a website user must give to the storage of and access to cookies on their equipment is not validly given by way of a pre-checked checkbox that the user must deselect to refuse consent. This is the case whether or not the information stored or accessed on the user’s equipment is personal data.
  • Consent must be freely given, specific, informed and unambiguous. So the fact that a user selects the button to participate in a promotional lottery (or reads a webpage, watches a video, etc.) is not sufficient for it to be concluded that the user validly gave his or her consent to the storage of cookies.
  • The information that the service provider must give to a user includes the duration of the operation of cookies and whether or not third parties may have access to those cookies.

Cookies and other website tracking technologies pose a unique challenge to businesses as they work to identify the personal information they collect and process. Identifying the presence of these technologies, their function, and the relationship with any third party that places them on the website is essential and requires a greater understanding of the website’s functionality as well as a deeper dive into the business’ analytics, marketing, and advertising practices. In addition, once cookie technologies are identified, businesses should review their existing cookie notice and consent policies as well as website privacy policies to determine if any updates should be made in light of applicable law. Whether the GDPR and e-Privacy Directive, the CCPA, or applicable U.S. state laws apply, organizations that use website cookies should take note. In the event these cookies collect personal data, your organization may be subject to additional data privacy compliance obligations.

 

State and local governments have increasingly become targets of cybersecurity attacks. This year, cybersecurity attacks on Baltimore and Lincoln County, North Carolina reportedly will cost those government entities $18.2 million and as much as $400,000, respectively, to recover from the attacks. Last year, Atlanta spent more than $7 million to recover from a ransomware attack. A report by cybersecurity firm Coveware shows that governments paid almost 10 times as much money on average in ransom as their private-sector counterparts over the second quarter of 2019.

Recognizing this risk, Massachusetts Governor Charlie Baker announced a new program to help cities and towns develop strategies to prevent cyberattacks. “The more capable the public realm becomes, the greater the challenges and the greater the risks associated with trust,” Baker said. “We need to do things to help.”

During the first Massachusetts Cybersecurity Week, at the state’s third annual Cybersecurity Forum capstone event, Governor Baker introduced an expansive cybersecurity program, including statewide workshops for municipalities to work together to enhance their cybersecurity capabilities, which will be led by the MassCyberCenter at the MassTech Collaborative.

Governor Baker discussed the “smart” future – a world of smart buildings, autonomous cars, and smart communities that is not too far away – and emphasized that states and municipalities need to be prepared. “We have a long way to go in the public sector to digitize our assets. I don’t think that’s a really big surprise to anybody in this room,” Baker said at a recent State House event, addressing a group of 200 executives from the private, public, and R&D sectors.

Baker’s Cybersecurity Program complements a similar program led by the National Governors Association (NGA), announced in July, in which the NGA will collaborate with cyber-related state agencies to help improve cybersecurity strategies in the public sector across the nation. Massachusetts was one of seven states selected by the NGA for the first phase of this program, to help develop an action plan and identify key priorities in cybersecurity.

Cyberattacks continue to be a major risk for private companies as well. Coveware reported that the average size of private companies targeted by ransomware in the second quarter of 2019 was 925 employees. McAfee Labs reported that ransomware attacks grew by 118% in the first quarter of 2019. Government entities and private companies alike should conduct risk assessments to develop appropriate security measures to protect them from the risk of cyberattacks.

This cybersecurity program is just another example of how Massachusetts continues to lead the way for other states on privacy and security matters. Check out other Massachusetts initiatives discussed on the blog.

 

The Washington State Supreme Court ruled recently that state employees’ birthdates associated with their names are not exempt from disclosure pursuant to a freedom of information records request. In so holding, the Court strictly construed the applicable statute that did not expressly exempt birthdates from disclosure. Wash. Pub. Emps. Assn. v. State Ctr for Childhood Deafness & Hearing Loss. Private and public entities across the country that respond to countless requests for information may want to rethink their approach.

In 2016, the Freedom Foundation (Foundation) sent public records access requests to several state agencies seeking disclosure of records for union-represented employees, including their full names, associated birth dates, and agency work e-mail addresses. Upon reviewing the Foundation’s requests, the agencies determined that all of the requested records were disclosable and indicated that, absent a court order, they intended to release the requested records. Several unions filed motions for preliminary and permanent injunctions to prevent disclosure of the requested records based (among other things) on privacy concerns.

In its decision, the Court stated, “We appreciate the Unions’ concern that disclosing birth dates with corresponding employee names may allow . . . requesters or others to obtain residential addresses and to potentially access financial information, retirement accounts, health care records or other employee records. Yet, we cannot judicially expand the [law’s] narrow exemptions beyond the boundaries set by the legislature, lest we step beyond our interpretive role and risk disrupting the balance of public policies the [law] reflects.”

Significantly, the Court noted that it had long ago defined the “right to privacy” by referring to the common law tort of invasion of privacy through public disclosure of private facts citing, Hearst Corp. v. Hoppe (1978). The State legislature subsequently codified a “right to privacy” as being invaded or violated “only if disclosure of information about the person: (1) Would be highly offensive to a reasonable person, and (2) is not of legitimate concern to the public.”

The Court did go on to acknowledge legitimate concerns about the misappropriation of birth dates that echo the concerns related to Social Security numbers. However, the Court ruled that this does not mean that names and associated birth dates have become private—only that this information is personally identifying. The fact that information is personally identifying, alone, is insufficient to warrant its exemption from disclosure.

Ultimately, the Court noted that the Unions’ argument was a policy-based one, concerned with the widespread abuse of personal identifiers for criminal purposes, and that such an argument was not the Court’s to make. While the Court was constrained by the statute at issue, which exists for the purpose of allowing the public to obtain information about government, it did acknowledge concern generally about the misappropriation of personally identifying information. This concern should be instructive for public and private sector entities alike.

Notably, there has been an increase across the country in state laws that have created or expanded on privacy rights (despite Washington’s failed effort earlier this year to pass the Washington Privacy Act, a European-style data protection law). These laws are expanding the categories of personal information that warrant protection – it is no longer just the Social Security number. When not compelled by law, such as a freedom of information law, public and private entities should consider disclosing only what is minimally necessary to respond to a request, with particular attention to data elements that facilitate identity theft.

More than 500 United States schools (connected with 54 different education entities such as school districts and colleges) have been infected with ransomware during the first nine months of 2019, according to a recent report by cybersecurity firm Armor, making the education sector one of the leading ransomware targets, second only to municipalities. We recently noted in this blog the NYS Education Department’s efforts to combat cyber threats against schools.

In a similar move — appearing to take notice of the continuing surge of cyber-attacks on schools, municipalities and other sectors — the US Senate recently approved the “Department of Homeland Security Cyber Hunt and Incident Response Teams Act of 2019”, bipartisan legislation that directs the Department of Homeland Security (DHS) to maintain permanent “cyber hunt and incident response teams” to assist both government and private entities in their efforts to prevent and, when necessary, appropriately respond to cybersecurity attacks. To become law, the Act – introduced by Senators Maggie Hassan (D-NH) and Robert Portman (R-OH) — needs to pass the US House of Representatives and be signed by the president.

The Act requires DHS to maintain the cyber hunt and incident response teams for the following purposes:

  • Assisting asset owners and operators in restoring services following a cyber-incident;
  • Identifying potential cyber intrusions and cyber risks to partners;
  • Developing mitigation strategies to prevent, deter and protect against cyber threats; and
  • Providing recommendations to asset owners and operators for improving their network security.

“As cyber threats become increasingly common, it is crucial that everyone from the federal government to local governments … have the resources and support that they need to strengthen their cybersecurity,” Senator Hassan said. “This bipartisan legislation will allow the best minds in cybersecurity to work together to better protect our digital infrastructure and to respond to attacks.”

This is not the first time Senators Hassan and Portman worked together on cybersecurity legislation. In 2018, both the Hack Department of Homeland Security Act (Hack DHS Act) and Public-Private Cybersecurity Cooperation Act (PPCCA) were included in a package of bills that were signed into law.

The Hack DHS Act established a bug bounty pilot program that uses ethical hackers to help identify unique and undiscovered vulnerabilities in DHS information systems, while the PPCCA requires DHS to establish a disclosure program so that vulnerabilities in DHS’ information systems can be reported and fixed with greater efficiency.

In the coming months, we will watch the House to see how it addresses the Act, and will report in this blog as there is any further movement through the legislative process.

Illinois continues to lead the way in privacy and security legislation. The Prairie State is home to the Biometric Information Privacy Act, first-of-its-kind legislation regulating the collection and possession of biometric information, and also the Personal Information Protection Act, considered one of the more expansive data breach notification laws in the nation. And now, in what has been described as “the most momentous legislative session in decades,” the Illinois state legislature unanimously passed the Artificial Intelligence Video Interview Act (“the AIVI Act”), HB2557, which imposes consent, transparency, and data destruction requirements on employers that implement AI technology during the job interview process. The AIVI Act, the first state law to regulate AI use in video interviews, will take effect January 1, 2020.

Below are several key obligations the AIVI Act imposes on employers:

  • Notification – The employer must notify the job applicant that AI will be used during the video interview for the purpose of analyzing the applicant’s facial expressions and considering the applicant’s fitness for the position. The applicant must also be provided with an information sheet prior to the interview detailing how the AI works and the characteristics it uses to evaluate applicants.
  • Consent – Employers must obtain written consent from any applicant who is evaluated by an AI program. It is worth noting that an employer is not required to consider an applicant who refuses to provide consent for the use of AI.
  • Limitations on AI Use – An employer may not use AI to evaluate applicants who have not consented to the use of AI analysis. In addition, an employer may not share applicant videos, except with persons whose expertise is necessary to evaluate an applicant’s fitness for a position.
  • Data Destruction – If an applicant requests the destruction of a video interview, the employer must comply within 30 days of receiving the request. Further, the employer must instruct all persons that have received a copy of the applicant’s video interview to destroy the footage.

The AIVI Act does not contain a “definitions” section, and is vague on several key matters. For example, the law is silent on penalties and enforcement, and there is no definition of AI or guidance on how notification should be provided. AI use in the hiring process is still in its early stages and the AIVI Act will likely be amended as necessary, particularly as the practice becomes more commonplace.

While there is no other state legislation to serve as a comparison, since as early as 2014 the EEOC has taken notice of “big data” technologies and the potential that the use of such technology may violate existing employment laws such as Title VII of the Civil Rights Act of 1964, the Age Discrimination in Employment Act, the Americans with Disabilities Act, and the Genetic Information Nondiscrimination Act. While the EEOC does not yet have an official policy on AI-based tools in the workplace, it has emphasized that employers must assess the benefits of AI-based tools against increased exposure and the risk of privacy and security issues. For more on the EEOC’s stance on AI, check out this interesting podcast episode with Dr. Romella El Kharzazi of the EEOC, “The EEOC and AI Based Assessments – the Inside Scoop,” on the podcast Science 4-Hire.

Only time will tell the impact the AIVI Act will have on employment practices. But if the AIVI Act is treated in a similar manner to the BIPA, which the Illinois Supreme Court has held does not require a showing of actual injury to sue, employers should tread carefully with AI usage in the workplace. Moreover, it will likely not be long before other states enact similar legislation. Employers, regardless of jurisdiction, should be evaluating their hiring practices and procedures, particularly to ensure that written consent is obtained before the use of any technology that collects the sensitive information of a job applicant or employee.

On February 21, 2019, California Attorney General Xavier Becerra and Assemblymember Marc Levine (D-San Rafael) announced Assembly Bill 1130, intended to strengthen and expand California’s existing data breach notification law. On September 11, 2019, the bill passed both houses of the legislature and was presented to Governor Gavin Newsom. Last Friday, October 11, 2019, the Governor signed AB 1130, together with six additional California Consumer Privacy Act of 2018 (“CCPA”) related bills, into law.

Prior to AB 1130, California’s breach notification law defined personal information in Cal Civil Code Sec. 1798.81.5(d)(1)(A) to include a covered person’s first name (or first initial) and last name coupled with sensitive personal information such as Social Security numbers, driver’s license numbers, financial account numbers, and medical and health information. AB 1130 expands the types of personal information in that section to include biometric information (e.g., fingerprint, retina scan data, iris image) and government identifiers (e.g., tax identification number, passport number, military identification number).

In addition to expanding the elements of personal information that are subject to a notification obligation in the event of a data breach, the change also increases litigation risk following a data breach. This is because, under the CCPA, consumers affected by a data breach can bring an action for statutory damages when the breach is caused by the business’s failure to maintain reasonable safeguards. And the CCPA specifically incorporates Civil Code Sec. 1798.81.5(d)(1)(A), which AB 1130 expanded. Now, a broader set of personal information, if breached and not reasonably safeguarded, could expose businesses subject to the CCPA to substantial damages. A consumer can recover statutory damages in an amount not less than $100 and not greater than $750 per consumer per incident, or actual damages, whichever is greater, as well as injunctive or declaratory relief and any other relief the court deems proper.
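The scale of that exposure is easy to underestimate. As a rough, hypothetical illustration (the 10,000-member class size below is invented, not drawn from any actual case; only the $100 to $750 per-consumer range comes from the statute), the per-incident arithmetic looks like this:

```python
# Hypothetical illustration of CCPA statutory-damage exposure after a
# single breach incident. Actual damages, if greater, would substitute.
STATUTORY_MIN = 100   # dollars per consumer per incident
STATUTORY_MAX = 750

def exposure_range(affected_consumers: int) -> tuple[int, int]:
    """Low and high ends of statutory damages for one incident."""
    return (affected_consumers * STATUTORY_MIN,
            affected_consumers * STATUTORY_MAX)

low, high = exposure_range(10_000)  # an invented 10,000-consumer class
print(f"${low:,} to ${high:,}")     # $1,000,000 to $7,500,000
```

Even at the statutory floor, a modest class quickly reaches seven figures, which is why plaintiffs need not prove actual loss to make these suits costly.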

Thus, in addition to the costs of notifications a covered business may have to incur under the state’s breach notification law, which could include providing ID theft resolution and credit monitoring services, class action lawsuits brought pursuant to this provision of the CCPA could be very costly. The expansion of the definition of personal information to include biometric information and government identifiers only increases these risks. It would be prudent for businesses subject to the CCPA to ensure reasonable safeguards are in place to protect all of these elements of personal information, and make sure their third-party service providers are doing the same.