The deadline to comply with the first set of requirements under the new DFS Cybersecurity Regulations (“the Regulations”) is here! By today, August 28, 2017, businesses subject to the Regulations must ensure that they:

  1. Designate a Chief Information Security Officer (“CISO”)
  2. Establish a Cybersecurity Program
  3. Develop a Written Cybersecurity Policy

We have prepared an article and a webinar to help subject businesses gain a better understanding of this first set of requirements. 

As future compliance deadlines approach, we will prepare similar guidance materials. Below, to assist subject businesses in crafting their long-term plans, are the future compliance deadlines that the Regulations impose.

Effective Date – Requirement(s)

  • 8/28/2017 – Cybersecurity Program; Cybersecurity Policy; Chief Information Security Officer (CISO)
  • 2/15/2018 – First Annual Certification by Senior Management or Board of Directors
  • 3/1/2018 – Penetration Testing and Vulnerability Assessments; Multi-Factor Authentication; Risk Assessment; Training and Monitoring: cybersecurity awareness training for all personnel
  • 9/3/2018 – Audit Trail; Application Security; Cybersecurity Personnel and Intelligence; Training and Monitoring: policies that monitor authorized users and detect unauthorized access or use of nonpublic information; Encryption of Nonpublic Information; Limitations on Data Retention
  • 3/1/2019 – Third Party Service Provider Security Policy

NOTE: Certain covered entities are exempt from some of the requirements listed above. Please contact the Jackson Lewis attorney with whom you work to confirm whether your business is exempt.

Delaware joins the growing number of states that have recently amended their data breach notification laws. On August 17, Delaware amended its data breach notification law with House Bill 180, the first significant change since 2005. The amendment takes effect 240 days after enactment (on or about April 14, 2018).

Delaware's amendment continues the state-law trend of requiring businesses to implement reasonable security measures, expanding the definition of personal information, increasing notification requirements, adopting a risk-of-harm trigger, and requiring mitigation.

Key aspects of Delaware’s amended data breach notification law include:

  • Maintain Reasonable Procedures and Practices to Protect Personal Information – Any “person” subject to the amended law is now required to implement and maintain reasonable security procedures and practices. The definition of “person” has been expanded to include any business form, governmental entity, “or any other legal entity”.
  • Expanding the Definition of “Personal Information” – The definition of “Personal Information” was expanded to include: passport number; a username or email address, in combination with a password or security question and answer that would permit access to an online account; medical history, mental or physical condition, medical treatment or diagnosis by a health care professional, or deoxyribonucleic acid profile; health insurance policy number, subscriber identification number, or any other unique identifier used by a health insurer to identify the person; unique biometric data generated from measurements or analysis of human body characteristics for authentication purposes; and an individual taxpayer identification number.
  • Data Breach Notification/Risk of Harm Trigger – Businesses affected by a data breach are now required to give notice to affected state residents “as soon as possible” following the conclusion of an investigation determining that “misuse of information about a Delaware resident has occurred or is likely to occur”. In addition, the amendment requires notification within 60 days unless the investigation “reasonably determines that breach of security is unlikely to result in harm to the individuals whose personal information has been breached” or law enforcement has requested a delay in notification.
  • Attorney General Notice – If the number of affected Delaware residents to be notified exceeds 500, notice must also be provided to the Attorney General.
  • Credit Monitoring – If the breach of security includes a Social Security number, the business is now required to offer each resident whose personal information was breached, or is reasonably believed to have been breached, reasonable identity theft prevention and mitigation services at no cost to the resident for a period of one year. Both California and Connecticut have similar provisions.

While not all states currently require reasonable safeguards or credit monitoring, there appears to be a growing trend (which we expect will continue) to include these requirements when breach notification laws are amended. As such, it is imperative for organizations facing a breach to ensure they are applying the most current law.

A New Jersey appeals court recently ruled that a two-year statute of limitations applies to a claim by an HIV-positive patient asserting one of his doctors improperly disclosed his medical status to a third party without consent.  The three-judge Appellate Division panel rejected arguments by the doctor that the suit should be dismissed as time-barred by the one-year statute of limitations typical of defamation claims.

The dispute arose out of a single incident on July 25, 2013, when the patient, given the fictitious name John Smith, was being treated for acute kidney failure by the defendant, who owns a kidney treatment center. Over the course of treatment, the defendant allegedly disclosed Smith’s HIV status to a third party, described as a longtime friend of Smith, who was unaware that Smith was HIV-positive.

Nearly two years after the incident, Smith filed suit in Mercer County Superior Court in New Jersey, on July 1, 2015, alleging violations of his common law right to privacy, medical malpractice and wrongful disclosure of his medical status under the state’s AIDS Assistance Act.

The defendant, in his motion to dismiss, argued that the one-year statute of limitations, typical of defamation claims, should apply in this case. Conversely, Smith argued that his claim was more akin to a personal injury or discrimination claim, as opposed to defamation, and thus a two-year statute of limitations should apply.

Superior Court Judge Douglas Hurd agreed with Smith that the defamation statute of limitations was not applicable and that the two-year statute of limitations should apply. The defendant appealed the decision.

On appeal, affirming the Superior Court’s ruling, Appellate Division Judge Richard Geiger stated, “Unlike a typical defamation claim, the confidential information allegedly disclosed by [defendant] to the third person was true, not false…The disclosed medical information did not place plaintiff in a false light.” Geiger went on to say, “Patients have a privacy right in their medical records and medical information…We find that the claims for unauthorized disclosure of a person’s HIV-positive status align more closely with discrimination claims.”

The AIDS Assistance Act was passed because the “effective identification, diagnosis, care and treatment of persons” with AIDS was declared by the New Jersey Legislature to be of “paramount public importance.”

Judge Geiger echoed the sentiments of the legislature in his decision, stating that maintaining the privacy rights of individuals who are HIV-positive reflects “strong public policy” and an “important social goal”.

The use of artificial intelligence (AI)-enabled cybersecurity systems is increasing dramatically. By 2018, sixty-two percent of all companies are projected to use AI technologies.

The use of AI cybersecurity systems provides greater efficiency through automation, the ability to evaluate larger data sets and, in many cases, a faster way to identify the “cyberattack needle in the big data haystack.” For example, some credit card companies use AI systems to scan large data banks for abnormal transactions and evaluate the gravity of a potential large scale cyber threat.

However, companies should not believe that AI systems alone provide a cybersecurity panacea. Cybersecurity solutions always require a human touch, such as risk analysis and case-specific strategies for individual cyberattack responses. AI systems can identify potential risk situations, but evaluating individual cases and crafting the proper individualized response still requires human analysis and participation.

AI cybersecurity systems should act as a “safety net” in assessing potential large-scale risks. In addition, AI systems can be adjusted through human intervention to distinguish malicious attacks from low-risk normal behavior. Moreover, cybersecurity experts expect that criminals will inevitably utilize AI to automate their attacks as well. Because of this anticipated criminal use of AI, fully automated cybersecurity will never, by definition, be possible. AI systems can be well-equipped to increase detection rates, but will always need human testers to find holes in programs and subsequently fortify AI defenses moving forward.

Bottom line: the use of AI-enabled cybersecurity systems should be explored and evaluated, but always used in conjunction with individual human training and response strategies.

The Maryland General Assembly recently amended the Maryland Personal Information Protection Act through House Bill 974, effective January 1, 2018. Notable amendments expand the definition of personal information, modify the definition of breach of the security of the system, provide a 45-day timeframe for notification, allow alternative notice for breaches that enable an individual’s email to be accessed, and expand the class of information subject to Maryland’s destruction of records laws.

The coming years likely will bring a variety of amendments to state data breach notification laws. Review our comprehensive discussion of Maryland’s new law, and trends in other state data breach notification laws.

Not to be outdone by the recent attention to biometric information in Illinois, and the Prairie State’s Biometric Information Privacy Act (BIPA), Washington enacted a biometric data protection statute of its own, HB 1493, which became effective July 23, 2017.

What is notable about Washington’s new biometric information law?

  • It prohibits “persons” from “enrolling” “biometric identifiers” in a database for a “commercial purpose” without first providing notice, obtaining consent, or providing a mechanism to prevent the subsequent use of the biometric identifiers for a commercial purpose. Lots of definitions, more on that below.
  • The exact type of notice and consent should depend on the context, and notice must be given through a procedure reasonably designed to be readily available to affected individuals. Note that the law does not require notice and consent if the person collects, captures, or enrolls a biometric identifier and stores it in a biometric system, or otherwise, in furtherance of a security purpose.
  • In general, a person that has obtained a biometric identifier from an individual and enrolled that identifier may not sell, lease or otherwise disclose the identifier absent consent. There are, of course, some exceptions, such as the disclosure being necessary to provide a product requested by the individual. In addition, a person generally may not use or disclose a biometric identifier for a purpose that is materially inconsistent with the terms under which the identifier was originally provided.
  • Persons that possess biometric identifiers of individuals that have been enrolled for a commercial purpose must (i) have reasonable safeguards to protect against unauthorized access or acquisition to the identifiers, and (ii) not retain the identifiers for longer than is necessary to carry out certain functions, such as providing the product for which the identifier was acquired.
  • There is no private right of action under the new Washington law. It is to be enforced by the state’s Attorney General. Remember that Illinois’ BIPA does permit persons to sue for violations of that law.

To understand how the law applies, one needs to review the defined terms. For example, the term “biometric identifiers” means:

data generated by automatic measurements of an individual’s biological characteristics, such as a fingerprint, voiceprint, eye retinas, irises, or other unique biological patterns or characteristics that is used to identify a specific individual. “Biometric identifier” does not include a physical or digital photograph, video or audio recording or data generated therefrom, or information collected, used, or stored for health care treatment, payment, or operations under the federal health insurance portability and accountability act of 1996.

The law also defines “commercial purpose” to mean:

a purpose in furtherance of the sale or disclosure to a third party of a biometric identifier for the purpose of marketing of goods or services when such goods or services are unrelated to the initial transaction in which a person first gains possession of an individual’s biometric identifier.

And, the term “enroll” means

to capture a biometric identifier of an individual, convert it into a reference template that cannot be reconstructed into the original output image, and store it in a database that matches the biometric identifier to a specific individual.

The use of biometrics and biometric identifiers in commercial transactions and for other purposes is growing, and so is the number of state laws intending to protect that kind of data. Businesses that use or disclose biometrics in carrying out their business should carefully consider whether this new state law applies and, if so, what they need to do to comply.

Capturing the time employees work can be a difficult business. In addition to the complexity involved with accurately tracking arrival times, lunch breaks, overtime, etc. across a range of federal and state laws (check out our Wage and Hour colleagues who keep up on all of these issues), many employers worry about “buddy punching” and other situations in which time is entered into their time management system by a person other than the employee to whom it relates. To address that worry, some companies have implemented biometric tools to validate time entries. A simple scan of an individual’s fingerprint, for example, can validate that the individual is the employee whose time is being entered. But that simple scan can come with significant compliance obligations, as well as exposure to litigation, as discussed in a recent Chicago Tribune article.

The use of biometric data still seems somewhat futuristic and high-tech, but the technology has been around for a while, and there are already a number of state laws addressing the collection, use and safeguarding of biometric information. We’ve discussed some of those here, including the Illinois Biometric Information Privacy Act (BIPA), which is the subject of the litigation referenced above. Notably, the Illinois law permits individuals to sue for violations and, if successful, to recover liquidated damages of $1,000 or actual damages, whichever is greater, along with attorneys’ fees and expert witness fees. The liquidated damages amount increases to $5,000 if the violation is intentional or reckless.

For businesses that want to deploy this technology, whether for time management, physical security, validating transactions or other purposes, there are a number of things to be considered. Here are just a few:

  • Is the company really capturing biometric information as defined under the applicable law? New York Labor Law Section 201-a generally prohibits the fingerprinting of employees by private employers. However, a biometric time management system may not actually be capturing a “fingerprint.” According to an opinion letter issued by the State’s Department of Labor on April 22, 2010, a device that measures the geometry of the hand is permissible as long as it does not scan the surface details of the hand and fingers in a manner similar or comparable to a fingerprint. But, under BIPA, this distinction may not work in some cases. “Biometric information” means any information, regardless of how it is captured, converted, stored, or shared, based on an individual’s biometric identifier used to identify an individual, such as a fingerprint. As a federal district court explained: The affirmative definition of “biometric information” does important work for [BIPA]; without it, private entities could evade (or at least arguably could evade) [BIPA]’s restrictions by converting a person’s biometric identifier into some other piece of information, like a mathematical representation or, even simpler, a unique number assigned to a person’s biometric identifier. So whatever a private entity does in manipulating a biometric identifier into a piece of information, the resulting information is still covered by [BIPA] if that information can be used to identify the person.
  • How long should biometric information be retained? A good rule of thumb – avoid keeping personal information for longer than is needed. The Illinois statute referenced above codifies this rule. Under that law, biometric identifiers and biometric information must be permanently destroyed when the initial purpose for collecting or obtaining such identifiers or information has been satisfied or within 3 years of the individual’s last interaction with the entity collecting it, whichever occurs first.
  • How should biometric information be accessed, stored and safeguarded? Before collecting biometric data, companies may need to provide notice and obtain written consent from the individual. This is the case in Illinois. As with other personal data, if it is accessible to or stored by a third-party service provider, the company should obtain written assurances from its vendors concerning such things as minimum safeguards, record retention, and breach response.
  • Is the company ready to handle a breach of biometric data? Currently, 48 states have passed laws requiring notification of a breach of “personal information.” Under those laws, the definitions of personal information vary, and the definitions are not limited to Social Security numbers. A number of them include biometric information, such as Connecticut, Illinois, Iowa and Nebraska. Accordingly, companies should include biometric data as part of their written incident response plans.

The use of biometrics is no longer something only seen in science fiction movies or police dramas on television. It is entering mainstream, including the workplace and the marketplace. Businesses need to be prepared.

Recently, the United States Court of Appeals for the Third Circuit was called upon to determine whether an unsolicited call that did not result in a charge to the consumer violated the Telephone Consumer Protection Act (“TCPA”) and, if it did, whether the harm was sufficiently concrete to provide plaintiff with standing to sue. Susinno v. Work Out World, Inc. (3d Cir. July 10, 2017).

In this case, plaintiff alleged that she received an unsolicited call on her cell phone from a fitness company. She did not answer the call and the company left a prerecorded offer on her voicemail lasting one minute. Plaintiff’s complaint asserted that the phone call and message violated the TCPA’s prohibition of prerecorded calls to cell phones. The lower court dismissed the case on defendant’s motion, but the Third Circuit reversed.

On appeal, the defendant argued that the TCPA does not prohibit a single prerecorded call if the phone’s owner is not charged for the call. The appellate court disagreed with the defendant’s statutory interpretation. In addition, the court cited a provision of the TCPA that indicates calls to a cell phone “that are not charged to the called party” can implicate “privacy rights” that Congress “intended to protect” even if the phone’s owner is not charged. Thus, the court ruled, plaintiff established a violation of the TCPA.

With regard to the issue of concrete injury, the court relied upon the U.S. Supreme Court’s 2016 decision in Spokeo v. Robins, which held that standing to pursue a violation of a federal law requires “concrete injury.” Here, the Third Circuit ruled that plaintiff alleged concrete injury because the injury alleged is the very injury the statute was intended to prevent.

Telemarketers and other businesses are cautioned to comply with applicable provisions of the TCPA and consider seeking counsel before embarking down a questionable path.

Data breach “horror” stories have become a new staple in today’s business environment. The frequency of attacks that threaten (or compromise) the security of business networks and information systems continually increases — in the health care space alone (which holds the dubious honor of Most Likely To Be Attacked), an FBI and HHS Office for Civil Rights report notes that ransomware attacks occur at the rate of 4,000 per day, a four-fold increase from 2015. Experienced data breach forecasters continue to predict that cyber-attacks will increase in frequency. Although data security and breach response are constantly in the headlines, studies demonstrate that organizations remain unprepared to effectively respond to a data breach.

For entities that are covered under HIPAA (or their business associates), or other state or federal cybersecurity regulations (such as the NYS DFS regulations we previously discussed in our articles, Getting Prepared for the New York Department of Financial Services’ Proposed Cybersecurity Regulations, and New York Releases Revised Proposed Cybersecurity Regulations) breach response preparedness is required. This would include periodic assessment and development of an effective incident response plan. Breach response readiness is not only required for many organizations, it is just sound business practice in today’s environment.

Is your organization ready? It may have an incident response plan, drafted a couple of years ago, adorning a forlorn shelf (blow the dust off carefully), but perhaps the plan has not been updated or tested, or staff has not been trained (and re-trained) — or legal counsel may not have provided input on the plan.

Legal counsel is valuable not only to provide input on legal definitions, notification processes, and third party contract provisions in the incident response plan. Another important benefit to including legal counsel in the planning process (as well as data breach response) is to ensure that the incident response plan is drafted to appropriately address legal counsel’s role, thereby protecting attorney-client/work product privileges. These protections are not absolute – in fact, there is significant case law discussing how and when they apply. Therefore, legal counsel should be involved in plan development and the plan should clearly provide that investigations are initiated and overseen by legal counsel as part of the breach response (and litigation risk assessment) process.

A May 18, 2017 decision of the United States District Court in the Central District of California underscores the benefits of legal counsel in breach response preparation and planning. In this decision, rendered in the context of the Experian breach litigation, the plaintiffs sought access to a forensic consultant’s report. The forensic consultant had been retained by Experian’s legal counsel immediately after the breach was discovered by Experian, and the report was used by legal counsel to develop a legal strategy for Experian’s response to the breach. The plaintiffs claimed the report should be disclosed because it was also used for the purpose of meeting Experian’s legal duty to investigate the data breach.

Despite the fact that the forensic consultant had previously worked for Experian (performing a very similar analysis), the court noted that the consulting firm was retained by legal counsel and that legal counsel directed the form and content of the report (so that only portions could be disseminated to Experian’s incident response team, ensuring privilege was not waived). On that basis, the court held that the report was work product and should not be disclosed to the other side.

The decision discusses another important point – whether the plaintiffs were entitled to disclosure of the report because they would not be able to re-create the investigation of the servers as it was performed on “live” operating networks, and therefore would suffer a substantial hardship. In this case, however, the report was prepared using server images, rather than the live systems. Consequently, the court held that there was no substantial hardship calling for the report to be disclosed.

At Jackson Lewis, our 24/7 Data Incident Response Team is prepared to assist with your planning and ready to assist if (when?) a breach occurs. Our data breach hotline is: 844-544-5296.